From patchwork Wed Jun 12 22:30:24 2024
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 13695635
From: Boqun Feng
To: rust-for-linux@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, llvm@lists.linux.dev
Cc: Miguel Ojeda, Alex Gaynor, Wedson Almeida Filho, Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl, Alan Stern, Andrea Parri, Will Deacon, Peter Zijlstra, Nicholas Piggin, David Howells, Jade Alglave, Luc Maranget, "Paul E. McKenney", Akira Yokosawa, Daniel Lustig, Joel Fernandes, Nathan Chancellor, Nick Desaulniers, kent.overstreet@gmail.com, Greg Kroah-Hartman, elver@google.com, Mark Rutland, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin", Catalin Marinas, torvalds@linux-foundation.org, linux-arm-kernel@lists.infradead.org, linux-fsdevel@vger.kernel.org, Trevor Gross, dakr@redhat.com
Subject: [RFC 1/2] rust: Introduce atomic API helpers
Date: Wed, 12 Jun 2024 15:30:24 -0700
Message-ID: <20240612223025.1158537-2-boqun.feng@gmail.com>
In-Reply-To: <20240612223025.1158537-1-boqun.feng@gmail.com>
References: <20240612223025.1158537-1-boqun.feng@gmail.com>
MIME-Version: 1.0

In order to support LKMM atomics in Rust, add rust_helper_* wrappers for
the atomic APIs. These helpers ensure that the implementation of LKMM
atomics in Rust stays the same as the one in C, saving the maintenance
burden of having two similar atomic implementations in asm.
Originally-by: Mark Rutland
Signed-off-by: Boqun Feng
---
 rust/atomic_helpers.h                     | 1035 +++++++++++++++++++++
 rust/helpers.c                            |    2 +
 scripts/atomic/gen-atomics.sh             |    1 +
 scripts/atomic/gen-rust-atomic-helpers.sh |   64 ++
 4 files changed, 1102 insertions(+)
 create mode 100644 rust/atomic_helpers.h
 create mode 100755 scripts/atomic/gen-rust-atomic-helpers.sh

diff --git a/rust/atomic_helpers.h b/rust/atomic_helpers.h
new file mode 100644
index 000000000000..4b24eceef5fc
--- /dev/null
+++ b/rust/atomic_helpers.h
@@ -0,0 +1,1035 @@
+// SPDX-License-Identifier: GPL-2.0
+
+// Generated by scripts/atomic/gen-rust-atomic-helpers.sh
+// DO NOT MODIFY THIS FILE DIRECTLY
+
+/*
+ * This file provides helpers for the various atomic functions for Rust.
+ */
+#ifndef _RUST_ATOMIC_API_H
+#define _RUST_ATOMIC_API_H
+
+#include <linux/atomic.h>
+
+__rust_helper int
+rust_helper_atomic_read(const atomic_t *v)
+{
+	return atomic_read(v);
+}
+
+__rust_helper int
+rust_helper_atomic_read_acquire(const atomic_t *v)
+{
+	return atomic_read_acquire(v);
+}
+
+__rust_helper void
+rust_helper_atomic_set(atomic_t *v, int i)
+{
+	atomic_set(v, i);
+}
+
+__rust_helper void
+rust_helper_atomic_set_release(atomic_t *v, int i)
+{
+	atomic_set_release(v, i);
+}
+
+__rust_helper void
+rust_helper_atomic_add(int i, atomic_t *v)
+{
+	atomic_add(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_add_return(int i, atomic_t *v)
+{
+	return atomic_add_return(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_add_return_acquire(int i, atomic_t *v)
+{
+	return atomic_add_return_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_add_return_release(int i, atomic_t *v)
+{
+	return atomic_add_return_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_add_return_relaxed(int i, atomic_t *v)
+{
+	return atomic_add_return_relaxed(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_add(int i, atomic_t *v)
+{
+	return atomic_fetch_add(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_add_acquire(int i, atomic_t *v)
+{
+	return atomic_fetch_add_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_add_release(int i, atomic_t *v)
+{
+	return atomic_fetch_add_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_add_relaxed(int i, atomic_t *v)
+{
+	return atomic_fetch_add_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic_sub(int i, atomic_t *v)
+{
+	atomic_sub(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_sub_return(int i, atomic_t *v)
+{
+	return atomic_sub_return(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_sub_return_acquire(int i, atomic_t *v)
+{
+	return atomic_sub_return_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_sub_return_release(int i, atomic_t *v)
+{
+	return atomic_sub_return_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_sub_return_relaxed(int i, atomic_t *v)
+{
+	return atomic_sub_return_relaxed(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_sub(int i, atomic_t *v)
+{
+	return atomic_fetch_sub(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_sub_acquire(int i, atomic_t *v)
+{
+	return atomic_fetch_sub_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_sub_release(int i, atomic_t *v)
+{
+	return atomic_fetch_sub_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_sub_relaxed(int i, atomic_t *v)
+{
+	return atomic_fetch_sub_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic_inc(atomic_t *v)
+{
+	atomic_inc(v);
+}
+
+__rust_helper int
+rust_helper_atomic_inc_return(atomic_t *v)
+{
+	return atomic_inc_return(v);
+}
+
+__rust_helper int
+rust_helper_atomic_inc_return_acquire(atomic_t *v)
+{
+	return atomic_inc_return_acquire(v);
+}
+
+__rust_helper int
+rust_helper_atomic_inc_return_release(atomic_t *v)
+{
+	return atomic_inc_return_release(v);
+}
+
+__rust_helper int
+rust_helper_atomic_inc_return_relaxed(atomic_t *v)
+{
+	return atomic_inc_return_relaxed(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_inc(atomic_t *v)
+{
+	return atomic_fetch_inc(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_inc_acquire(atomic_t *v)
+{
+	return atomic_fetch_inc_acquire(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_inc_release(atomic_t *v)
+{
+	return atomic_fetch_inc_release(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_inc_relaxed(atomic_t *v)
+{
+	return atomic_fetch_inc_relaxed(v);
+}
+
+__rust_helper void
+rust_helper_atomic_dec(atomic_t *v)
+{
+	atomic_dec(v);
+}
+
+__rust_helper int
+rust_helper_atomic_dec_return(atomic_t *v)
+{
+	return atomic_dec_return(v);
+}
+
+__rust_helper int
+rust_helper_atomic_dec_return_acquire(atomic_t *v)
+{
+	return atomic_dec_return_acquire(v);
+}
+
+__rust_helper int
+rust_helper_atomic_dec_return_release(atomic_t *v)
+{
+	return atomic_dec_return_release(v);
+}
+
+__rust_helper int
+rust_helper_atomic_dec_return_relaxed(atomic_t *v)
+{
+	return atomic_dec_return_relaxed(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_dec(atomic_t *v)
+{
+	return atomic_fetch_dec(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_dec_acquire(atomic_t *v)
+{
+	return atomic_fetch_dec_acquire(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_dec_release(atomic_t *v)
+{
+	return atomic_fetch_dec_release(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_dec_relaxed(atomic_t *v)
+{
+	return atomic_fetch_dec_relaxed(v);
+}
+
+__rust_helper void
+rust_helper_atomic_and(int i, atomic_t *v)
+{
+	atomic_and(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_and(int i, atomic_t *v)
+{
+	return atomic_fetch_and(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_and_acquire(int i, atomic_t *v)
+{
+	return atomic_fetch_and_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_and_release(int i, atomic_t *v)
+{
+	return atomic_fetch_and_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_and_relaxed(int i, atomic_t *v)
+{
+	return atomic_fetch_and_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic_andnot(int i, atomic_t *v)
+{
+	atomic_andnot(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_andnot(int i, atomic_t *v)
+{
+	return atomic_fetch_andnot(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_andnot_acquire(int i, atomic_t *v)
+{
+	return atomic_fetch_andnot_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_andnot_release(int i, atomic_t *v)
+{
+	return atomic_fetch_andnot_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_andnot_relaxed(int i, atomic_t *v)
+{
+	return atomic_fetch_andnot_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic_or(int i, atomic_t *v)
+{
+	atomic_or(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_or(int i, atomic_t *v)
+{
+	return atomic_fetch_or(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_or_acquire(int i, atomic_t *v)
+{
+	return atomic_fetch_or_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_or_release(int i, atomic_t *v)
+{
+	return atomic_fetch_or_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_or_relaxed(int i, atomic_t *v)
+{
+	return atomic_fetch_or_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic_xor(int i, atomic_t *v)
+{
+	atomic_xor(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_xor(int i, atomic_t *v)
+{
+	return atomic_fetch_xor(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_xor_acquire(int i, atomic_t *v)
+{
+	return atomic_fetch_xor_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_xor_release(int i, atomic_t *v)
+{
+	return atomic_fetch_xor_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_xor_relaxed(int i, atomic_t *v)
+{
+	return atomic_fetch_xor_relaxed(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_xchg(atomic_t *v, int new)
+{
+	return atomic_xchg(v, new);
+}
+
+__rust_helper int
+rust_helper_atomic_xchg_acquire(atomic_t *v, int new)
+{
+	return atomic_xchg_acquire(v, new);
+}
+
+__rust_helper int
+rust_helper_atomic_xchg_release(atomic_t *v, int new)
+{
+	return atomic_xchg_release(v, new);
+}
+
+__rust_helper int
+rust_helper_atomic_xchg_relaxed(atomic_t *v, int new)
+{
+	return atomic_xchg_relaxed(v, new);
+}
+
+__rust_helper int
+rust_helper_atomic_cmpxchg(atomic_t *v, int old, int new)
+{
+	return atomic_cmpxchg(v, old, new);
+}
+
+__rust_helper int
+rust_helper_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
+{
+	return atomic_cmpxchg_acquire(v, old, new);
+}
+
+__rust_helper int
+rust_helper_atomic_cmpxchg_release(atomic_t *v, int old, int new)
+{
+	return atomic_cmpxchg_release(v, old, new);
+}
+
+__rust_helper int
+rust_helper_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
+{
+	return atomic_cmpxchg_relaxed(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
+{
+	return atomic_try_cmpxchg(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
+{
+	return atomic_try_cmpxchg_acquire(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
+{
+	return atomic_try_cmpxchg_release(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
+{
+	return atomic_try_cmpxchg_relaxed(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic_sub_and_test(int i, atomic_t *v)
+{
+	return atomic_sub_and_test(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic_dec_and_test(atomic_t *v)
+{
+	return atomic_dec_and_test(v);
+}
+
+__rust_helper bool
+rust_helper_atomic_inc_and_test(atomic_t *v)
+{
+	return atomic_inc_and_test(v);
+}
+
+__rust_helper bool
+rust_helper_atomic_add_negative(int i, atomic_t *v)
+{
+	return atomic_add_negative(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic_add_negative_acquire(int i, atomic_t *v)
+{
+	return atomic_add_negative_acquire(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic_add_negative_release(int i, atomic_t *v)
+{
+	return atomic_add_negative_release(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic_add_negative_relaxed(int i, atomic_t *v)
+{
+	return atomic_add_negative_relaxed(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_add_unless(atomic_t *v, int a, int u)
+{
+	return atomic_fetch_add_unless(v, a, u);
+}
+
+__rust_helper bool
+rust_helper_atomic_add_unless(atomic_t *v, int a, int u)
+{
+	return atomic_add_unless(v, a, u);
+}
+
+__rust_helper bool
+rust_helper_atomic_inc_not_zero(atomic_t *v)
+{
+	return atomic_inc_not_zero(v);
+}
+
+__rust_helper bool
+rust_helper_atomic_inc_unless_negative(atomic_t *v)
+{
+	return atomic_inc_unless_negative(v);
+}
+
+__rust_helper bool
+rust_helper_atomic_dec_unless_positive(atomic_t *v)
+{
+	return atomic_dec_unless_positive(v);
+}
+
+__rust_helper int
+rust_helper_atomic_dec_if_positive(atomic_t *v)
+{
+	return atomic_dec_if_positive(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_read(const atomic64_t *v)
+{
+	return atomic64_read(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_read_acquire(const atomic64_t *v)
+{
+	return atomic64_read_acquire(v);
+}
+
+__rust_helper void
+rust_helper_atomic64_set(atomic64_t *v, s64 i)
+{
+	atomic64_set(v, i);
+}
+
+__rust_helper void
+rust_helper_atomic64_set_release(atomic64_t *v, s64 i)
+{
+	atomic64_set_release(v, i);
+}
+
+__rust_helper void
+rust_helper_atomic64_add(s64 i, atomic64_t *v)
+{
+	atomic64_add(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_add_return(s64 i, atomic64_t *v)
+{
+	return atomic64_add_return(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_add_return_acquire(s64 i, atomic64_t *v)
+{
+	return atomic64_add_return_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_add_return_release(s64 i, atomic64_t *v)
+{
+	return atomic64_add_return_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_add_return_relaxed(s64 i, atomic64_t *v)
+{
+	return atomic64_add_return_relaxed(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_add(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_add(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_add_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_add_release(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_add_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_add_relaxed(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_add_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic64_sub(s64 i, atomic64_t *v)
+{
+	atomic64_sub(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_sub_return(s64 i, atomic64_t *v)
+{
+	return atomic64_sub_return(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_sub_return_acquire(s64 i, atomic64_t *v)
+{
+	return atomic64_sub_return_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_sub_return_release(s64 i, atomic64_t *v)
+{
+	return atomic64_sub_return_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_sub_return_relaxed(s64 i, atomic64_t *v)
+{
+	return atomic64_sub_return_relaxed(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_sub(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_sub(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_sub_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_sub_release(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_sub_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_sub_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic64_inc(atomic64_t *v)
+{
+	atomic64_inc(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_inc_return(atomic64_t *v)
+{
+	return atomic64_inc_return(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_inc_return_acquire(atomic64_t *v)
+{
+	return atomic64_inc_return_acquire(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_inc_return_release(atomic64_t *v)
+{
+	return atomic64_inc_return_release(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_inc_return_relaxed(atomic64_t *v)
+{
+	return atomic64_inc_return_relaxed(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_inc(atomic64_t *v)
+{
+	return atomic64_fetch_inc(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_inc_acquire(atomic64_t *v)
+{
+	return atomic64_fetch_inc_acquire(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_inc_release(atomic64_t *v)
+{
+	return atomic64_fetch_inc_release(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_inc_relaxed(atomic64_t *v)
+{
+	return atomic64_fetch_inc_relaxed(v);
+}
+
+__rust_helper void
+rust_helper_atomic64_dec(atomic64_t *v)
+{
+	atomic64_dec(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_dec_return(atomic64_t *v)
+{
+	return atomic64_dec_return(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_dec_return_acquire(atomic64_t *v)
+{
+	return atomic64_dec_return_acquire(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_dec_return_release(atomic64_t *v)
+{
+	return atomic64_dec_return_release(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_dec_return_relaxed(atomic64_t *v)
+{
+	return atomic64_dec_return_relaxed(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_dec(atomic64_t *v)
+{
+	return atomic64_fetch_dec(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_dec_acquire(atomic64_t *v)
+{
+	return atomic64_fetch_dec_acquire(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_dec_release(atomic64_t *v)
+{
+	return atomic64_fetch_dec_release(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_dec_relaxed(atomic64_t *v)
+{
+	return atomic64_fetch_dec_relaxed(v);
+}
+
+__rust_helper void
+rust_helper_atomic64_and(s64 i, atomic64_t *v)
+{
+	atomic64_and(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_and(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_and(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_and_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_and_release(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_and_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_and_relaxed(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_and_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic64_andnot(s64 i, atomic64_t *v)
+{
+	atomic64_andnot(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_andnot(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_andnot(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_andnot_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_andnot_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_andnot_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic64_or(s64 i, atomic64_t *v)
+{
+	atomic64_or(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_or(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_or(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_or_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_or_release(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_or_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_or_relaxed(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_or_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic64_xor(s64 i, atomic64_t *v)
+{
+	atomic64_xor(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_xor(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_xor(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_xor_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_xor_release(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_xor_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_xor_relaxed(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_xchg(atomic64_t *v, s64 new)
+{
+	return atomic64_xchg(v, new);
+}
+
+__rust_helper s64
+rust_helper_atomic64_xchg_acquire(atomic64_t *v, s64 new)
+{
+	return atomic64_xchg_acquire(v, new);
+}
+
+__rust_helper s64
+rust_helper_atomic64_xchg_release(atomic64_t *v, s64 new)
+{
+	return atomic64_xchg_release(v, new);
+}
+
+__rust_helper s64
+rust_helper_atomic64_xchg_relaxed(atomic64_t *v, s64 new)
+{
+	return atomic64_xchg_relaxed(v, new);
+}
+
+__rust_helper s64
+rust_helper_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
+{
+	return atomic64_cmpxchg(v, old, new);
+}
+
+__rust_helper s64
+rust_helper_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
+{
+	return atomic64_cmpxchg_acquire(v, old, new);
+}
+
+__rust_helper s64
+rust_helper_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
+{
+	return atomic64_cmpxchg_release(v, old, new);
+}
+
+__rust_helper s64
+rust_helper_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
+{
+	return atomic64_cmpxchg_relaxed(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
+{
+	return atomic64_try_cmpxchg(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
+{
+	return atomic64_try_cmpxchg_acquire(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
+{
+	return atomic64_try_cmpxchg_release(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
+{
+	return atomic64_try_cmpxchg_relaxed(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic64_sub_and_test(s64 i, atomic64_t *v)
+{
+	return atomic64_sub_and_test(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_dec_and_test(atomic64_t *v)
+{
+	return atomic64_dec_and_test(v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_inc_and_test(atomic64_t *v)
+{
+	return atomic64_inc_and_test(v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_add_negative(s64 i, atomic64_t *v)
+{
+	return atomic64_add_negative(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
+{
+	return atomic64_add_negative_acquire(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_add_negative_release(s64 i, atomic64_t *v)
+{
+	return atomic64_add_negative_release(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
+{
+	return atomic64_add_negative_relaxed(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
+{
+	return atomic64_fetch_add_unless(v, a, u);
+}
+
+__rust_helper bool
+rust_helper_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
+{
+	return atomic64_add_unless(v, a, u);
+}
+
+__rust_helper bool
+rust_helper_atomic64_inc_not_zero(atomic64_t *v)
+{
+	return atomic64_inc_not_zero(v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_inc_unless_negative(atomic64_t *v)
+{
+	return atomic64_inc_unless_negative(v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_dec_unless_positive(atomic64_t *v)
+{
+	return atomic64_dec_unless_positive(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_dec_if_positive(atomic64_t *v)
+{
+	return atomic64_dec_if_positive(v);
+}
+
+#endif /* _RUST_ATOMIC_API_H */
+// e4edb6174dd42a265284958f00a7cea7ddb464b1
diff --git a/rust/helpers.c b/rust/helpers.c
index 3abf96f14148..2da644877a29 100644
--- a/rust/helpers.c
+++ b/rust/helpers.c
@@ -34,6 +34,8 @@
 #include
 #include "helpers.h"
 
+#include "atomic_helpers.h"
+
 __rust_helper __noreturn void rust_helper_BUG(void)
 {
 	BUG();
diff --git a/scripts/atomic/gen-atomics.sh b/scripts/atomic/gen-atomics.sh
index 5b98a8307693..3927aee034c8 100755
--- a/scripts/atomic/gen-atomics.sh
+++ b/scripts/atomic/gen-atomics.sh
@@ -11,6 +11,7 @@ cat <<EOF |
 gen-atomic-instrumented.sh      linux/atomic/atomic-instrumented.h
 gen-atomic-long.sh              linux/atomic/atomic-long.h
 gen-atomic-fallback.sh          linux/atomic/atomic-fallback.h
+gen-rust-atomic-helpers.sh      ../rust/atomic_helpers.h
 EOF
 while read script header args; do
 	${ATOMICDIR}/${script} ${ATOMICTBL} ${args} > ${LINUXDIR}/include/${header}
diff --git a/scripts/atomic/gen-rust-atomic-helpers.sh b/scripts/atomic/gen-rust-atomic-helpers.sh
new file mode 100755
index 000000000000..5ef68017dd89
--- /dev/null
+++ b/scripts/atomic/gen-rust-atomic-helpers.sh
@@ -0,0 +1,64 @@
+#!/bin/sh
+# SPDX-License-Identifier: GPL-2.0
+
+ATOMICDIR=$(dirname $0)
+
+. ${ATOMICDIR}/atomic-tbl.sh
+
+#gen_proto_order_variant(meta, pfx, name, sfx, order, atomic, int, raw, arg...)
+gen_proto_order_variant()
+{
+	local meta="$1"; shift
+	local pfx="$1"; shift
+	local name="$1"; shift
+	local sfx="$1"; shift
+	local order="$1"; shift
+	local atomic="$1"; shift
+	local int="$1"; shift
+	local raw="$1"; shift
+	local attrs="${raw:+noinstr }"
+
+	local atomicname="${raw}${atomic}_${pfx}${name}${sfx}${order}"
+
+	local ret="$(gen_ret_type "${meta}" "${int}")"
+	local params="$(gen_params "${int}" "${atomic}" "$@")"
+	local args="$(gen_args "$@")"
+	local retstmt="$(gen_ret_stmt "${meta}")"
+
+cat <<EOF
+__rust_helper ${ret}
+rust_helper_${atomicname}(${params})
+{
+	${retstmt}${atomicname}(${args});
+}
+
+EOF
+}
+
+cat << EOF
+// SPDX-License-Identifier: GPL-2.0
+
+// Generated by $0
+// DO NOT MODIFY THIS FILE DIRECTLY
+
+/*
+ * This file provides helpers for the various atomic functions for Rust.
+ */
+#ifndef _RUST_ATOMIC_API_H
+#define _RUST_ATOMIC_API_H
+
+#include <linux/atomic.h>
+
+EOF
+
+grep '^[a-z]' "$1" | while read name meta args; do
+	gen_proto "${meta}" "${name}" "atomic" "int" "" ${args}
+done
+
+grep '^[a-z]' "$1" | while read name meta args; do
+	gen_proto "${meta}" "${name}" "atomic64" "s64" "" ${args}
+done
+
+cat <<EOF
+
+#endif /* _RUST_ATOMIC_API_H */
+EOF

X-Patchwork-Id: 13695636
From: Boqun Feng
To: rust-for-linux@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, llvm@lists.linux.dev
Cc: Miguel Ojeda , Alex Gaynor , Wedson Almeida Filho , Boqun Feng , Gary Guo , Björn Roy Baron , Benno Lossin , Andreas Hindborg , Alice Ryhl , Alan Stern , Andrea Parri , Will Deacon , Peter Zijlstra , Nicholas Piggin , David Howells , Jade Alglave , Luc Maranget , "Paul E.
McKenney" , Akira Yokosawa , Daniel Lustig , Joel Fernandes , Nathan Chancellor , Nick Desaulniers , kent.overstreet@gmail.com, Greg Kroah-Hartman , elver@google.com, Mark Rutland , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , x86@kernel.org, "H. Peter Anvin" , Catalin Marinas , torvalds@linux-foundation.org, linux-arm-kernel@lists.infradead.org, linux-fsdevel@vger.kernel.org, Trevor Gross , dakr@redhat.com
Subject: [RFC 2/2] rust: sync: Add atomic support
Date: Wed, 12 Jun 2024 15:30:25 -0700
Message-ID: <20240612223025.1158537-3-boqun.feng@gmail.com>
In-Reply-To: <20240612223025.1158537-1-boqun.feng@gmail.com>
References: <20240612223025.1158537-1-boqun.feng@gmail.com>

Provide two atomic types, AtomicI32 and AtomicI64, built on the existing
implementation of C atomics. These atomics have the same semantics as the
corresponding LKMM C atomics, and using a single memory (ordering) model
certainly reduces the reasoning difficulty and the potential for bugs
arising from the interaction of two different memory models.

Also bump my role to maintainer of ATOMIC INFRASTRUCTURE to reflect my
responsibility for these Rust APIs.

Note that the `Atomic*::new()` functions are implemented via open coding
on struct atomic*_t. This allows `new()` to be a `const` function, so
that it can be used in constant contexts.
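The open-coded `const fn new()` pattern described above can be sketched standalone. This is an illustrative, hedged mock-up only, not the kernel code: `std::cell::UnsafeCell` stands in for the kernel's `Opaque` wrapper, `RawAtomic` stands in for the C `atomic_t`, and the non-atomic `read` merely shows the access shape (the real implementation calls the C `atomic_read` helper):

```rust
use std::cell::UnsafeCell;

// Stand-in for the C `atomic_t` layout: a single counter field.
#[repr(C)]
struct RawAtomic {
    counter: i32,
}

// Stand-in for the kernel's interior-mutable `Opaque<T>` storage.
pub struct AtomicI32(UnsafeCell<RawAtomic>);

// SAFETY (illustrative): in the real type, all accesses go through
// the atomic C helpers, so sharing across contexts is sound.
unsafe impl Sync for AtomicI32 {}

impl AtomicI32 {
    // Open-coding the struct initializer (instead of calling a C init
    // function) is what lets `new` be a `const fn`, so it is usable in
    // constant contexts such as statics.
    pub const fn new(v: i32) -> Self {
        AtomicI32(UnsafeCell::new(RawAtomic { counter: v }))
    }

    // Sketch only: a plain read; the kernel version calls `atomic_read`.
    pub fn read(&self) -> i32 {
        // SAFETY: `self.0.get()` points at validly initialized storage.
        unsafe { (*self.0.get()).counter }
    }
}

// `new` being `const` allows initializing a static:
static X: AtomicI32 = AtomicI32::new(42);

fn main() {
    println!("{}", X.read());
}
```

The same shape is why the patch initializes `Opaque::new(atomic_t { counter: v })` directly rather than going through a C constructor, which could not be `const`.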
Signed-off-by: Boqun Feng
---
 MAINTAINERS                       |    4 +-
 arch/arm64/kernel/cpufeature.c    |    2 +
 rust/kernel/sync.rs               |    1 +
 rust/kernel/sync/atomic.rs        |   63 ++
 rust/kernel/sync/atomic/impl.rs   | 1375 +++++++++++++++++++++++++++++
 scripts/atomic/gen-atomics.sh     |    1 +
 scripts/atomic/gen-rust-atomic.sh |  136 +++
 7 files changed, 1581 insertions(+), 1 deletion(-)
 create mode 100644 rust/kernel/sync/atomic.rs
 create mode 100644 rust/kernel/sync/atomic/impl.rs
 create mode 100755 scripts/atomic/gen-rust-atomic.sh

diff --git a/MAINTAINERS b/MAINTAINERS
index d6c90161c7bf..a8528d27b260 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3458,7 +3458,7 @@ F:	drivers/input/touchscreen/atmel_mxt_ts.c
 ATOMIC INFRASTRUCTURE
 M:	Will Deacon
 M:	Peter Zijlstra
-R:	Boqun Feng
+M:	Boqun Feng
 R:	Mark Rutland
 L:	linux-kernel@vger.kernel.org
 S:	Maintained
@@ -3467,6 +3467,8 @@ F:	arch/*/include/asm/atomic*.h
 F:	include/*/atomic*.h
 F:	include/linux/refcount.h
 F:	scripts/atomic/
+F:	rust/kernel/sync/atomic.rs
+F:	rust/kernel/sync/atomic/

 ATTO EXPRESSSAS SAS/SATA RAID SCSI DRIVER
 M:	Bradley Grove
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 48e7029f1054..99e6e2b2867f 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1601,6 +1601,8 @@ static bool has_cpuid_feature(const struct arm64_cpu_capabilities *entry,
 				  int scope)
 {
 	u64 val = read_scoped_sysreg(entry, scope);
+	if (entry->capability == ARM64_HAS_LSE_ATOMICS)
+		return false;
 	return feature_matches(val, entry);
 }
diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
index 0ab20975a3b5..66ac3752ca71 100644
--- a/rust/kernel/sync.rs
+++ b/rust/kernel/sync.rs
@@ -8,6 +8,7 @@
 use crate::types::Opaque;

 mod arc;
+pub mod atomic;
 mod condvar;
 pub mod lock;
 mod locked_by;
diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
new file mode 100644
index 000000000000..b0f852cf1741
--- /dev/null
+++ b/rust/kernel/sync/atomic.rs
@@ -0,0 +1,63 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Atomic primitives.
+//!
+//! These primitives have the same semantics as their C counterparts; for the precise definitions
+//! of the semantics, please refer to tools/memory-model. Note that the Linux Kernel Memory
+//! (Consistency) Model is the only memory model for Rust development in the kernel right now, so
+//! please avoid using Rust's own atomics.
+
+use crate::bindings::{atomic64_t, atomic_t};
+use crate::types::Opaque;
+
+mod r#impl;
+
+/// Atomic 32-bit signed integers.
+pub struct AtomicI32(Opaque<atomic_t>);
+
+/// Atomic 64-bit signed integers.
+pub struct AtomicI64(Opaque<atomic64_t>);
+
+impl AtomicI32 {
+    /// Creates an atomic variable with initial value `v`.
+    ///
+    /// # Examples
+    ///
+    /// ```rust
+    /// # use kernel::sync::atomic::*;
+    ///
+    /// let x = AtomicI32::new(0);
+    ///
+    /// assert_eq!(x.read(), 0);
+    /// assert_eq!(x.fetch_add_relaxed(12), 0);
+    /// assert_eq!(x.read(), 12);
+    /// ```
+    pub const fn new(v: i32) -> Self {
+        AtomicI32(Opaque::new(atomic_t { counter: v }))
+    }
+}
+
+// SAFETY: `AtomicI32` is safe to share among execution contexts because all accesses are atomic.
+unsafe impl Sync for AtomicI32 {}
+
+impl AtomicI64 {
+    /// Creates an atomic variable with initial value `v`.
+    ///
+    /// # Examples
+    ///
+    /// ```rust
+    /// # use kernel::sync::atomic::*;
+    ///
+    /// let x = AtomicI64::new(0);
+    ///
+    /// assert_eq!(x.read(), 0);
+    /// assert_eq!(x.fetch_add_relaxed(12), 0);
+    /// assert_eq!(x.read(), 12);
+    /// ```
+    pub const fn new(v: i64) -> Self {
+        AtomicI64(Opaque::new(atomic64_t { counter: v }))
+    }
+}
+
+// SAFETY: `AtomicI64` is safe to share among execution contexts because all accesses are atomic.
+unsafe impl Sync for AtomicI64 {}
diff --git a/rust/kernel/sync/atomic/impl.rs b/rust/kernel/sync/atomic/impl.rs
new file mode 100644
index 000000000000..1bbb0a714834
--- /dev/null
+++ b/rust/kernel/sync/atomic/impl.rs
@@ -0,0 +1,1375 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Generated by scripts/atomic/gen-rust-atomic.sh
+//!
DO NOT MODIFY THIS FILE DIRECTLY + +use super::*; +use crate::bindings::*; + +impl AtomicI32 { + /// See `atomic_read`. + #[inline(always)] + pub fn read(&self) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_read(self.0.get()); + } + } + /// See `atomic_read_acquire`. + #[inline(always)] + pub fn read_acquire(&self) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_read_acquire(self.0.get()); + } + } + /// See `atomic_set`. + #[inline(always)] + pub fn set(&self, i: i32) { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + atomic_set(self.0.get(), i); + } + } + /// See `atomic_set_release`. + #[inline(always)] + pub fn set_release(&self, i: i32) { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + atomic_set_release(self.0.get(), i); + } + } + /// See `atomic_add`. + #[inline(always)] + pub fn add(&self, i: i32) { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + atomic_add(i, self.0.get()); + } + } + /// See `atomic_add_return`. + #[inline(always)] + pub fn add_return(&self, i: i32) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_add_return(i, self.0.get()); + } + } + /// See `atomic_add_return_acquire`. + #[inline(always)] + pub fn add_return_acquire(&self, i: i32) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_add_return_acquire(i, self.0.get()); + } + } + /// See `atomic_add_return_release`. + #[inline(always)] + pub fn add_return_release(&self, i: i32) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_add_return_release(i, self.0.get()); + } + } + /// See `atomic_add_return_relaxed`. + #[inline(always)] + pub fn add_return_relaxed(&self, i: i32) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_add_return_relaxed(i, self.0.get()); + } + } + /// See `atomic_fetch_add`. 
+ #[inline(always)] + pub fn fetch_add(&self, i: i32) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_fetch_add(i, self.0.get()); + } + } + /// See `atomic_fetch_add_acquire`. + #[inline(always)] + pub fn fetch_add_acquire(&self, i: i32) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_fetch_add_acquire(i, self.0.get()); + } + } + /// See `atomic_fetch_add_release`. + #[inline(always)] + pub fn fetch_add_release(&self, i: i32) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_fetch_add_release(i, self.0.get()); + } + } + /// See `atomic_fetch_add_relaxed`. + #[inline(always)] + pub fn fetch_add_relaxed(&self, i: i32) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_fetch_add_relaxed(i, self.0.get()); + } + } + /// See `atomic_sub`. + #[inline(always)] + pub fn sub(&self, i: i32) { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + atomic_sub(i, self.0.get()); + } + } + /// See `atomic_sub_return`. + #[inline(always)] + pub fn sub_return(&self, i: i32) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_sub_return(i, self.0.get()); + } + } + /// See `atomic_sub_return_acquire`. + #[inline(always)] + pub fn sub_return_acquire(&self, i: i32) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_sub_return_acquire(i, self.0.get()); + } + } + /// See `atomic_sub_return_release`. + #[inline(always)] + pub fn sub_return_release(&self, i: i32) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_sub_return_release(i, self.0.get()); + } + } + /// See `atomic_sub_return_relaxed`. + #[inline(always)] + pub fn sub_return_relaxed(&self, i: i32) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_sub_return_relaxed(i, self.0.get()); + } + } + /// See `atomic_fetch_sub`. 
+ #[inline(always)] + pub fn fetch_sub(&self, i: i32) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_fetch_sub(i, self.0.get()); + } + } + /// See `atomic_fetch_sub_acquire`. + #[inline(always)] + pub fn fetch_sub_acquire(&self, i: i32) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_fetch_sub_acquire(i, self.0.get()); + } + } + /// See `atomic_fetch_sub_release`. + #[inline(always)] + pub fn fetch_sub_release(&self, i: i32) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_fetch_sub_release(i, self.0.get()); + } + } + /// See `atomic_fetch_sub_relaxed`. + #[inline(always)] + pub fn fetch_sub_relaxed(&self, i: i32) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_fetch_sub_relaxed(i, self.0.get()); + } + } + /// See `atomic_inc`. + #[inline(always)] + pub fn inc(&self) { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + atomic_inc(self.0.get()); + } + } + /// See `atomic_inc_return`. + #[inline(always)] + pub fn inc_return(&self) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_inc_return(self.0.get()); + } + } + /// See `atomic_inc_return_acquire`. + #[inline(always)] + pub fn inc_return_acquire(&self) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_inc_return_acquire(self.0.get()); + } + } + /// See `atomic_inc_return_release`. + #[inline(always)] + pub fn inc_return_release(&self) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_inc_return_release(self.0.get()); + } + } + /// See `atomic_inc_return_relaxed`. + #[inline(always)] + pub fn inc_return_relaxed(&self) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_inc_return_relaxed(self.0.get()); + } + } + /// See `atomic_fetch_inc`. + #[inline(always)] + pub fn fetch_inc(&self) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. 
+ unsafe { + return atomic_fetch_inc(self.0.get()); + } + } + /// See `atomic_fetch_inc_acquire`. + #[inline(always)] + pub fn fetch_inc_acquire(&self) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_fetch_inc_acquire(self.0.get()); + } + } + /// See `atomic_fetch_inc_release`. + #[inline(always)] + pub fn fetch_inc_release(&self) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_fetch_inc_release(self.0.get()); + } + } + /// See `atomic_fetch_inc_relaxed`. + #[inline(always)] + pub fn fetch_inc_relaxed(&self) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_fetch_inc_relaxed(self.0.get()); + } + } + /// See `atomic_dec`. + #[inline(always)] + pub fn dec(&self) { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + atomic_dec(self.0.get()); + } + } + /// See `atomic_dec_return`. + #[inline(always)] + pub fn dec_return(&self) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_dec_return(self.0.get()); + } + } + /// See `atomic_dec_return_acquire`. + #[inline(always)] + pub fn dec_return_acquire(&self) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_dec_return_acquire(self.0.get()); + } + } + /// See `atomic_dec_return_release`. + #[inline(always)] + pub fn dec_return_release(&self) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_dec_return_release(self.0.get()); + } + } + /// See `atomic_dec_return_relaxed`. + #[inline(always)] + pub fn dec_return_relaxed(&self) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_dec_return_relaxed(self.0.get()); + } + } + /// See `atomic_fetch_dec`. + #[inline(always)] + pub fn fetch_dec(&self) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_fetch_dec(self.0.get()); + } + } + /// See `atomic_fetch_dec_acquire`. 
+ #[inline(always)] + pub fn fetch_dec_acquire(&self) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_fetch_dec_acquire(self.0.get()); + } + } + /// See `atomic_fetch_dec_release`. + #[inline(always)] + pub fn fetch_dec_release(&self) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_fetch_dec_release(self.0.get()); + } + } + /// See `atomic_fetch_dec_relaxed`. + #[inline(always)] + pub fn fetch_dec_relaxed(&self) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_fetch_dec_relaxed(self.0.get()); + } + } + /// See `atomic_and`. + #[inline(always)] + pub fn and(&self, i: i32) { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + atomic_and(i, self.0.get()); + } + } + /// See `atomic_fetch_and`. + #[inline(always)] + pub fn fetch_and(&self, i: i32) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_fetch_and(i, self.0.get()); + } + } + /// See `atomic_fetch_and_acquire`. + #[inline(always)] + pub fn fetch_and_acquire(&self, i: i32) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_fetch_and_acquire(i, self.0.get()); + } + } + /// See `atomic_fetch_and_release`. + #[inline(always)] + pub fn fetch_and_release(&self, i: i32) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_fetch_and_release(i, self.0.get()); + } + } + /// See `atomic_fetch_and_relaxed`. + #[inline(always)] + pub fn fetch_and_relaxed(&self, i: i32) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_fetch_and_relaxed(i, self.0.get()); + } + } + /// See `atomic_andnot`. + #[inline(always)] + pub fn andnot(&self, i: i32) { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + atomic_andnot(i, self.0.get()); + } + } + /// See `atomic_fetch_andnot`. + #[inline(always)] + pub fn fetch_andnot(&self, i: i32) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. 
+ unsafe { + return atomic_fetch_andnot(i, self.0.get()); + } + } + /// See `atomic_fetch_andnot_acquire`. + #[inline(always)] + pub fn fetch_andnot_acquire(&self, i: i32) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_fetch_andnot_acquire(i, self.0.get()); + } + } + /// See `atomic_fetch_andnot_release`. + #[inline(always)] + pub fn fetch_andnot_release(&self, i: i32) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_fetch_andnot_release(i, self.0.get()); + } + } + /// See `atomic_fetch_andnot_relaxed`. + #[inline(always)] + pub fn fetch_andnot_relaxed(&self, i: i32) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_fetch_andnot_relaxed(i, self.0.get()); + } + } + /// See `atomic_or`. + #[inline(always)] + pub fn or(&self, i: i32) { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + atomic_or(i, self.0.get()); + } + } + /// See `atomic_fetch_or`. + #[inline(always)] + pub fn fetch_or(&self, i: i32) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_fetch_or(i, self.0.get()); + } + } + /// See `atomic_fetch_or_acquire`. + #[inline(always)] + pub fn fetch_or_acquire(&self, i: i32) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_fetch_or_acquire(i, self.0.get()); + } + } + /// See `atomic_fetch_or_release`. + #[inline(always)] + pub fn fetch_or_release(&self, i: i32) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_fetch_or_release(i, self.0.get()); + } + } + /// See `atomic_fetch_or_relaxed`. + #[inline(always)] + pub fn fetch_or_relaxed(&self, i: i32) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_fetch_or_relaxed(i, self.0.get()); + } + } + /// See `atomic_xor`. + #[inline(always)] + pub fn xor(&self, i: i32) { + // SAFETY:`self.0.get()` is a valid pointer. 
+ unsafe { + atomic_xor(i, self.0.get()); + } + } + /// See `atomic_fetch_xor`. + #[inline(always)] + pub fn fetch_xor(&self, i: i32) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_fetch_xor(i, self.0.get()); + } + } + /// See `atomic_fetch_xor_acquire`. + #[inline(always)] + pub fn fetch_xor_acquire(&self, i: i32) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_fetch_xor_acquire(i, self.0.get()); + } + } + /// See `atomic_fetch_xor_release`. + #[inline(always)] + pub fn fetch_xor_release(&self, i: i32) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_fetch_xor_release(i, self.0.get()); + } + } + /// See `atomic_fetch_xor_relaxed`. + #[inline(always)] + pub fn fetch_xor_relaxed(&self, i: i32) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_fetch_xor_relaxed(i, self.0.get()); + } + } + /// See `atomic_xchg`. + #[inline(always)] + pub fn xchg(&self, new: i32) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_xchg(self.0.get(), new); + } + } + /// See `atomic_xchg_acquire`. + #[inline(always)] + pub fn xchg_acquire(&self, new: i32) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_xchg_acquire(self.0.get(), new); + } + } + /// See `atomic_xchg_release`. + #[inline(always)] + pub fn xchg_release(&self, new: i32) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_xchg_release(self.0.get(), new); + } + } + /// See `atomic_xchg_relaxed`. + #[inline(always)] + pub fn xchg_relaxed(&self, new: i32) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_xchg_relaxed(self.0.get(), new); + } + } + /// See `atomic_cmpxchg`. + #[inline(always)] + pub fn cmpxchg(&self, old: i32, new: i32) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. 
+ unsafe { + return atomic_cmpxchg(self.0.get(), old, new); + } + } + /// See `atomic_cmpxchg_acquire`. + #[inline(always)] + pub fn cmpxchg_acquire(&self, old: i32, new: i32) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_cmpxchg_acquire(self.0.get(), old, new); + } + } + /// See `atomic_cmpxchg_release`. + #[inline(always)] + pub fn cmpxchg_release(&self, old: i32, new: i32) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_cmpxchg_release(self.0.get(), old, new); + } + } + /// See `atomic_cmpxchg_relaxed`. + #[inline(always)] + pub fn cmpxchg_relaxed(&self, old: i32, new: i32) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_cmpxchg_relaxed(self.0.get(), old, new); + } + } + /// See `atomic_try_cmpxchg`. + #[inline(always)] + pub fn try_cmpxchg(&self, old: &mut i32, new: i32) -> bool { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_try_cmpxchg(self.0.get(), old, new); + } + } + /// See `atomic_try_cmpxchg_acquire`. + #[inline(always)] + pub fn try_cmpxchg_acquire(&self, old: &mut i32, new: i32) -> bool { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_try_cmpxchg_acquire(self.0.get(), old, new); + } + } + /// See `atomic_try_cmpxchg_release`. + #[inline(always)] + pub fn try_cmpxchg_release(&self, old: &mut i32, new: i32) -> bool { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_try_cmpxchg_release(self.0.get(), old, new); + } + } + /// See `atomic_try_cmpxchg_relaxed`. + #[inline(always)] + pub fn try_cmpxchg_relaxed(&self, old: &mut i32, new: i32) -> bool { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_try_cmpxchg_relaxed(self.0.get(), old, new); + } + } + /// See `atomic_sub_and_test`. + #[inline(always)] + pub fn sub_and_test(&self, i: i32) -> bool { + // SAFETY:`self.0.get()` is a valid pointer. 
+ unsafe { + return atomic_sub_and_test(i, self.0.get()); + } + } + /// See `atomic_dec_and_test`. + #[inline(always)] + pub fn dec_and_test(&self) -> bool { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_dec_and_test(self.0.get()); + } + } + /// See `atomic_inc_and_test`. + #[inline(always)] + pub fn inc_and_test(&self) -> bool { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_inc_and_test(self.0.get()); + } + } + /// See `atomic_add_negative`. + #[inline(always)] + pub fn add_negative(&self, i: i32) -> bool { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_add_negative(i, self.0.get()); + } + } + /// See `atomic_add_negative_acquire`. + #[inline(always)] + pub fn add_negative_acquire(&self, i: i32) -> bool { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_add_negative_acquire(i, self.0.get()); + } + } + /// See `atomic_add_negative_release`. + #[inline(always)] + pub fn add_negative_release(&self, i: i32) -> bool { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_add_negative_release(i, self.0.get()); + } + } + /// See `atomic_add_negative_relaxed`. + #[inline(always)] + pub fn add_negative_relaxed(&self, i: i32) -> bool { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_add_negative_relaxed(i, self.0.get()); + } + } + /// See `atomic_fetch_add_unless`. + #[inline(always)] + pub fn fetch_add_unless(&self, a: i32, u: i32) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_fetch_add_unless(self.0.get(), a, u); + } + } + /// See `atomic_add_unless`. + #[inline(always)] + pub fn add_unless(&self, a: i32, u: i32) -> bool { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_add_unless(self.0.get(), a, u); + } + } + /// See `atomic_inc_not_zero`. + #[inline(always)] + pub fn inc_not_zero(&self) -> bool { + // SAFETY:`self.0.get()` is a valid pointer. 
+ unsafe { + return atomic_inc_not_zero(self.0.get()); + } + } + /// See `atomic_inc_unless_negative`. + #[inline(always)] + pub fn inc_unless_negative(&self) -> bool { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_inc_unless_negative(self.0.get()); + } + } + /// See `atomic_dec_unless_positive`. + #[inline(always)] + pub fn dec_unless_positive(&self) -> bool { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_dec_unless_positive(self.0.get()); + } + } + /// See `atomic_dec_if_positive`. + #[inline(always)] + pub fn dec_if_positive(&self) -> i32 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic_dec_if_positive(self.0.get()); + } + } +} + +impl AtomicI64 { + /// See `atomic64_read`. + #[inline(always)] + pub fn read(&self) -> i64 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic64_read(self.0.get()); + } + } + /// See `atomic64_read_acquire`. + #[inline(always)] + pub fn read_acquire(&self) -> i64 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic64_read_acquire(self.0.get()); + } + } + /// See `atomic64_set`. + #[inline(always)] + pub fn set(&self, i: i64) { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + atomic64_set(self.0.get(), i); + } + } + /// See `atomic64_set_release`. + #[inline(always)] + pub fn set_release(&self, i: i64) { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + atomic64_set_release(self.0.get(), i); + } + } + /// See `atomic64_add`. + #[inline(always)] + pub fn add(&self, i: i64) { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + atomic64_add(i, self.0.get()); + } + } + /// See `atomic64_add_return`. + #[inline(always)] + pub fn add_return(&self, i: i64) -> i64 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic64_add_return(i, self.0.get()); + } + } + /// See `atomic64_add_return_acquire`. 
+ #[inline(always)] + pub fn add_return_acquire(&self, i: i64) -> i64 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic64_add_return_acquire(i, self.0.get()); + } + } + /// See `atomic64_add_return_release`. + #[inline(always)] + pub fn add_return_release(&self, i: i64) -> i64 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic64_add_return_release(i, self.0.get()); + } + } + /// See `atomic64_add_return_relaxed`. + #[inline(always)] + pub fn add_return_relaxed(&self, i: i64) -> i64 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic64_add_return_relaxed(i, self.0.get()); + } + } + /// See `atomic64_fetch_add`. + #[inline(always)] + pub fn fetch_add(&self, i: i64) -> i64 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic64_fetch_add(i, self.0.get()); + } + } + /// See `atomic64_fetch_add_acquire`. + #[inline(always)] + pub fn fetch_add_acquire(&self, i: i64) -> i64 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic64_fetch_add_acquire(i, self.0.get()); + } + } + /// See `atomic64_fetch_add_release`. + #[inline(always)] + pub fn fetch_add_release(&self, i: i64) -> i64 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic64_fetch_add_release(i, self.0.get()); + } + } + /// See `atomic64_fetch_add_relaxed`. + #[inline(always)] + pub fn fetch_add_relaxed(&self, i: i64) -> i64 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic64_fetch_add_relaxed(i, self.0.get()); + } + } + /// See `atomic64_sub`. + #[inline(always)] + pub fn sub(&self, i: i64) { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + atomic64_sub(i, self.0.get()); + } + } + /// See `atomic64_sub_return`. + #[inline(always)] + pub fn sub_return(&self, i: i64) -> i64 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic64_sub_return(i, self.0.get()); + } + } + /// See `atomic64_sub_return_acquire`. 
+ #[inline(always)] + pub fn sub_return_acquire(&self, i: i64) -> i64 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic64_sub_return_acquire(i, self.0.get()); + } + } + /// See `atomic64_sub_return_release`. + #[inline(always)] + pub fn sub_return_release(&self, i: i64) -> i64 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic64_sub_return_release(i, self.0.get()); + } + } + /// See `atomic64_sub_return_relaxed`. + #[inline(always)] + pub fn sub_return_relaxed(&self, i: i64) -> i64 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic64_sub_return_relaxed(i, self.0.get()); + } + } + /// See `atomic64_fetch_sub`. + #[inline(always)] + pub fn fetch_sub(&self, i: i64) -> i64 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic64_fetch_sub(i, self.0.get()); + } + } + /// See `atomic64_fetch_sub_acquire`. + #[inline(always)] + pub fn fetch_sub_acquire(&self, i: i64) -> i64 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic64_fetch_sub_acquire(i, self.0.get()); + } + } + /// See `atomic64_fetch_sub_release`. + #[inline(always)] + pub fn fetch_sub_release(&self, i: i64) -> i64 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic64_fetch_sub_release(i, self.0.get()); + } + } + /// See `atomic64_fetch_sub_relaxed`. + #[inline(always)] + pub fn fetch_sub_relaxed(&self, i: i64) -> i64 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic64_fetch_sub_relaxed(i, self.0.get()); + } + } + /// See `atomic64_inc`. + #[inline(always)] + pub fn inc(&self) { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + atomic64_inc(self.0.get()); + } + } + /// See `atomic64_inc_return`. + #[inline(always)] + pub fn inc_return(&self) -> i64 { + // SAFETY:`self.0.get()` is a valid pointer. + unsafe { + return atomic64_inc_return(self.0.get()); + } + } + /// See `atomic64_inc_return_acquire`. 
+    #[inline(always)]
+    pub fn inc_return_acquire(&self) -> i64 {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_inc_return_acquire(self.0.get());
+        }
+    }
+    /// See `atomic64_inc_return_release`.
+    #[inline(always)]
+    pub fn inc_return_release(&self) -> i64 {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_inc_return_release(self.0.get());
+        }
+    }
+    /// See `atomic64_inc_return_relaxed`.
+    #[inline(always)]
+    pub fn inc_return_relaxed(&self) -> i64 {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_inc_return_relaxed(self.0.get());
+        }
+    }
+    /// See `atomic64_fetch_inc`.
+    #[inline(always)]
+    pub fn fetch_inc(&self) -> i64 {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_fetch_inc(self.0.get());
+        }
+    }
+    /// See `atomic64_fetch_inc_acquire`.
+    #[inline(always)]
+    pub fn fetch_inc_acquire(&self) -> i64 {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_fetch_inc_acquire(self.0.get());
+        }
+    }
+    /// See `atomic64_fetch_inc_release`.
+    #[inline(always)]
+    pub fn fetch_inc_release(&self) -> i64 {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_fetch_inc_release(self.0.get());
+        }
+    }
+    /// See `atomic64_fetch_inc_relaxed`.
+    #[inline(always)]
+    pub fn fetch_inc_relaxed(&self) -> i64 {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_fetch_inc_relaxed(self.0.get());
+        }
+    }
+    /// See `atomic64_dec`.
+    #[inline(always)]
+    pub fn dec(&self) {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            atomic64_dec(self.0.get());
+        }
+    }
+    /// See `atomic64_dec_return`.
+    #[inline(always)]
+    pub fn dec_return(&self) -> i64 {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_dec_return(self.0.get());
+        }
+    }
+    /// See `atomic64_dec_return_acquire`.
+    #[inline(always)]
+    pub fn dec_return_acquire(&self) -> i64 {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_dec_return_acquire(self.0.get());
+        }
+    }
+    /// See `atomic64_dec_return_release`.
+    #[inline(always)]
+    pub fn dec_return_release(&self) -> i64 {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_dec_return_release(self.0.get());
+        }
+    }
+    /// See `atomic64_dec_return_relaxed`.
+    #[inline(always)]
+    pub fn dec_return_relaxed(&self) -> i64 {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_dec_return_relaxed(self.0.get());
+        }
+    }
+    /// See `atomic64_fetch_dec`.
+    #[inline(always)]
+    pub fn fetch_dec(&self) -> i64 {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_fetch_dec(self.0.get());
+        }
+    }
+    /// See `atomic64_fetch_dec_acquire`.
+    #[inline(always)]
+    pub fn fetch_dec_acquire(&self) -> i64 {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_fetch_dec_acquire(self.0.get());
+        }
+    }
+    /// See `atomic64_fetch_dec_release`.
+    #[inline(always)]
+    pub fn fetch_dec_release(&self) -> i64 {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_fetch_dec_release(self.0.get());
+        }
+    }
+    /// See `atomic64_fetch_dec_relaxed`.
+    #[inline(always)]
+    pub fn fetch_dec_relaxed(&self) -> i64 {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_fetch_dec_relaxed(self.0.get());
+        }
+    }
+    /// See `atomic64_and`.
+    #[inline(always)]
+    pub fn and(&self, i: i64) {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            atomic64_and(i, self.0.get());
+        }
+    }
+    /// See `atomic64_fetch_and`.
+    #[inline(always)]
+    pub fn fetch_and(&self, i: i64) -> i64 {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_fetch_and(i, self.0.get());
+        }
+    }
+    /// See `atomic64_fetch_and_acquire`.
+    #[inline(always)]
+    pub fn fetch_and_acquire(&self, i: i64) -> i64 {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_fetch_and_acquire(i, self.0.get());
+        }
+    }
+    /// See `atomic64_fetch_and_release`.
+    #[inline(always)]
+    pub fn fetch_and_release(&self, i: i64) -> i64 {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_fetch_and_release(i, self.0.get());
+        }
+    }
+    /// See `atomic64_fetch_and_relaxed`.
+    #[inline(always)]
+    pub fn fetch_and_relaxed(&self, i: i64) -> i64 {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_fetch_and_relaxed(i, self.0.get());
+        }
+    }
+    /// See `atomic64_andnot`.
+    #[inline(always)]
+    pub fn andnot(&self, i: i64) {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            atomic64_andnot(i, self.0.get());
+        }
+    }
+    /// See `atomic64_fetch_andnot`.
+    #[inline(always)]
+    pub fn fetch_andnot(&self, i: i64) -> i64 {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_fetch_andnot(i, self.0.get());
+        }
+    }
+    /// See `atomic64_fetch_andnot_acquire`.
+    #[inline(always)]
+    pub fn fetch_andnot_acquire(&self, i: i64) -> i64 {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_fetch_andnot_acquire(i, self.0.get());
+        }
+    }
+    /// See `atomic64_fetch_andnot_release`.
+    #[inline(always)]
+    pub fn fetch_andnot_release(&self, i: i64) -> i64 {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_fetch_andnot_release(i, self.0.get());
+        }
+    }
+    /// See `atomic64_fetch_andnot_relaxed`.
+    #[inline(always)]
+    pub fn fetch_andnot_relaxed(&self, i: i64) -> i64 {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_fetch_andnot_relaxed(i, self.0.get());
+        }
+    }
+    /// See `atomic64_or`.
+    #[inline(always)]
+    pub fn or(&self, i: i64) {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            atomic64_or(i, self.0.get());
+        }
+    }
+    /// See `atomic64_fetch_or`.
+    #[inline(always)]
+    pub fn fetch_or(&self, i: i64) -> i64 {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_fetch_or(i, self.0.get());
+        }
+    }
+    /// See `atomic64_fetch_or_acquire`.
+    #[inline(always)]
+    pub fn fetch_or_acquire(&self, i: i64) -> i64 {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_fetch_or_acquire(i, self.0.get());
+        }
+    }
+    /// See `atomic64_fetch_or_release`.
+    #[inline(always)]
+    pub fn fetch_or_release(&self, i: i64) -> i64 {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_fetch_or_release(i, self.0.get());
+        }
+    }
+    /// See `atomic64_fetch_or_relaxed`.
+    #[inline(always)]
+    pub fn fetch_or_relaxed(&self, i: i64) -> i64 {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_fetch_or_relaxed(i, self.0.get());
+        }
+    }
+    /// See `atomic64_xor`.
+    #[inline(always)]
+    pub fn xor(&self, i: i64) {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            atomic64_xor(i, self.0.get());
+        }
+    }
+    /// See `atomic64_fetch_xor`.
+    #[inline(always)]
+    pub fn fetch_xor(&self, i: i64) -> i64 {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_fetch_xor(i, self.0.get());
+        }
+    }
+    /// See `atomic64_fetch_xor_acquire`.
+    #[inline(always)]
+    pub fn fetch_xor_acquire(&self, i: i64) -> i64 {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_fetch_xor_acquire(i, self.0.get());
+        }
+    }
+    /// See `atomic64_fetch_xor_release`.
+    #[inline(always)]
+    pub fn fetch_xor_release(&self, i: i64) -> i64 {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_fetch_xor_release(i, self.0.get());
+        }
+    }
+    /// See `atomic64_fetch_xor_relaxed`.
+    #[inline(always)]
+    pub fn fetch_xor_relaxed(&self, i: i64) -> i64 {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_fetch_xor_relaxed(i, self.0.get());
+        }
+    }
+    /// See `atomic64_xchg`.
+    #[inline(always)]
+    pub fn xchg(&self, new: i64) -> i64 {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_xchg(self.0.get(), new);
+        }
+    }
+    /// See `atomic64_xchg_acquire`.
+    #[inline(always)]
+    pub fn xchg_acquire(&self, new: i64) -> i64 {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_xchg_acquire(self.0.get(), new);
+        }
+    }
+    /// See `atomic64_xchg_release`.
+    #[inline(always)]
+    pub fn xchg_release(&self, new: i64) -> i64 {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_xchg_release(self.0.get(), new);
+        }
+    }
+    /// See `atomic64_xchg_relaxed`.
+    #[inline(always)]
+    pub fn xchg_relaxed(&self, new: i64) -> i64 {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_xchg_relaxed(self.0.get(), new);
+        }
+    }
+    /// See `atomic64_cmpxchg`.
+    #[inline(always)]
+    pub fn cmpxchg(&self, old: i64, new: i64) -> i64 {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_cmpxchg(self.0.get(), old, new);
+        }
+    }
+    /// See `atomic64_cmpxchg_acquire`.
+    #[inline(always)]
+    pub fn cmpxchg_acquire(&self, old: i64, new: i64) -> i64 {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_cmpxchg_acquire(self.0.get(), old, new);
+        }
+    }
+    /// See `atomic64_cmpxchg_release`.
+    #[inline(always)]
+    pub fn cmpxchg_release(&self, old: i64, new: i64) -> i64 {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_cmpxchg_release(self.0.get(), old, new);
+        }
+    }
+    /// See `atomic64_cmpxchg_relaxed`.
+    #[inline(always)]
+    pub fn cmpxchg_relaxed(&self, old: i64, new: i64) -> i64 {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_cmpxchg_relaxed(self.0.get(), old, new);
+        }
+    }
+    /// See `atomic64_try_cmpxchg`.
+    #[inline(always)]
+    pub fn try_cmpxchg(&self, old: &mut i64, new: i64) -> bool {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_try_cmpxchg(self.0.get(), old, new);
+        }
+    }
+    /// See `atomic64_try_cmpxchg_acquire`.
+    #[inline(always)]
+    pub fn try_cmpxchg_acquire(&self, old: &mut i64, new: i64) -> bool {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_try_cmpxchg_acquire(self.0.get(), old, new);
+        }
+    }
+    /// See `atomic64_try_cmpxchg_release`.
+    #[inline(always)]
+    pub fn try_cmpxchg_release(&self, old: &mut i64, new: i64) -> bool {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_try_cmpxchg_release(self.0.get(), old, new);
+        }
+    }
+    /// See `atomic64_try_cmpxchg_relaxed`.
+    #[inline(always)]
+    pub fn try_cmpxchg_relaxed(&self, old: &mut i64, new: i64) -> bool {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_try_cmpxchg_relaxed(self.0.get(), old, new);
+        }
+    }
+    /// See `atomic64_sub_and_test`.
+    #[inline(always)]
+    pub fn sub_and_test(&self, i: i64) -> bool {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_sub_and_test(i, self.0.get());
+        }
+    }
+    /// See `atomic64_dec_and_test`.
+    #[inline(always)]
+    pub fn dec_and_test(&self) -> bool {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_dec_and_test(self.0.get());
+        }
+    }
+    /// See `atomic64_inc_and_test`.
+    #[inline(always)]
+    pub fn inc_and_test(&self) -> bool {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_inc_and_test(self.0.get());
+        }
+    }
+    /// See `atomic64_add_negative`.
+    #[inline(always)]
+    pub fn add_negative(&self, i: i64) -> bool {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_add_negative(i, self.0.get());
+        }
+    }
+    /// See `atomic64_add_negative_acquire`.
+    #[inline(always)]
+    pub fn add_negative_acquire(&self, i: i64) -> bool {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_add_negative_acquire(i, self.0.get());
+        }
+    }
+    /// See `atomic64_add_negative_release`.
+    #[inline(always)]
+    pub fn add_negative_release(&self, i: i64) -> bool {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_add_negative_release(i, self.0.get());
+        }
+    }
+    /// See `atomic64_add_negative_relaxed`.
+    #[inline(always)]
+    pub fn add_negative_relaxed(&self, i: i64) -> bool {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_add_negative_relaxed(i, self.0.get());
+        }
+    }
+    /// See `atomic64_fetch_add_unless`.
+    #[inline(always)]
+    pub fn fetch_add_unless(&self, a: i64, u: i64) -> i64 {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_fetch_add_unless(self.0.get(), a, u);
+        }
+    }
+    /// See `atomic64_add_unless`.
+    #[inline(always)]
+    pub fn add_unless(&self, a: i64, u: i64) -> bool {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_add_unless(self.0.get(), a, u);
+        }
+    }
+    /// See `atomic64_inc_not_zero`.
+    #[inline(always)]
+    pub fn inc_not_zero(&self) -> bool {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_inc_not_zero(self.0.get());
+        }
+    }
+    /// See `atomic64_inc_unless_negative`.
+    #[inline(always)]
+    pub fn inc_unless_negative(&self) -> bool {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_inc_unless_negative(self.0.get());
+        }
+    }
+    /// See `atomic64_dec_unless_positive`.
+    #[inline(always)]
+    pub fn dec_unless_positive(&self) -> bool {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_dec_unless_positive(self.0.get());
+        }
+    }
+    /// See `atomic64_dec_if_positive`.
+    #[inline(always)]
+    pub fn dec_if_positive(&self) -> i64 {
+        // SAFETY: `self.0.get()` is a valid pointer.
+        unsafe {
+            return atomic64_dec_if_positive(self.0.get());
+        }
+    }
+}
+
+// 258c6b7d580a83146e21973b1068cc92d9e65b87
diff --git a/scripts/atomic/gen-atomics.sh b/scripts/atomic/gen-atomics.sh
index 3927aee034c8..8d250c885c24 100755
--- a/scripts/atomic/gen-atomics.sh
+++ b/scripts/atomic/gen-atomics.sh
@@ -12,6 +12,7 @@ gen-atomic-instrumented.sh linux/atomic/atomic-instrumented.h
 gen-atomic-long.sh linux/atomic/atomic-long.h
 gen-atomic-fallback.sh linux/atomic/atomic-arch-fallback.h
 gen-rust-atomic-helpers.sh ../rust/atomic_helpers.h
+gen-rust-atomic.sh ../rust/kernel/sync/atomic/impl.rs
 EOF
 while read script header args; do
 	/bin/sh ${ATOMICDIR}/${script} ${ATOMICTBL} ${args} > ${LINUXDIR}/include/${header}
diff --git a/scripts/atomic/gen-rust-atomic.sh b/scripts/atomic/gen-rust-atomic.sh
new file mode 100755
index 000000000000..491c643301a9
--- /dev/null
+++ b/scripts/atomic/gen-rust-atomic.sh
@@ -0,0 +1,136 @@
+#!/bin/sh
+# SPDX-License-Identifier: GPL-2.0
+
+ATOMICDIR=$(dirname $0)
+
+. ${ATOMICDIR}/atomic-tbl.sh
+
+#gen_ret_type(meta, int)
+gen_ret_type() {
+	local meta="$1"; shift
+	local int="$1"; shift
+
+	case "${meta}" in
+	[sv]) printf "";;
+	[bB]) printf -- "-> bool ";;
+	[aiIfFlR]) printf -- "-> ${int} ";;
+	esac
+}
+
+# gen_param_type(arg, int)
+gen_param_type()
+{
+	local type="${1%%:*}"; shift
+	local int="$1"; shift
+
+	case "${type}" in
+	i) type="${int}";;
+	p) type="&mut ${int}";;
+	esac
+
+	printf "${type}"
+}
+
+#gen_param(arg, int)
+gen_param()
+{
+	local arg="$1"; shift
+	local int="$1"; shift
+	local name="$(gen_param_name "${arg}")"
+	local type="$(gen_param_type "${arg}" "${int}")"
+
+	printf "${name}: ${type}"
+}
+
+#gen_params(int, arg...)
+gen_params()
+{
+	local int="$1"; shift
+
+	while [ "$#" -gt 0 ]; do
+		if [ "$1" != "v" ] && [ "$1" != "cv" ]; then
+			printf ", "
+			gen_param "$1" "${int}"
+		fi
+		shift;
+	done
+}
+
+#gen_args(arg...)
+gen_args()
+{
+	while [ "$#" -gt 0 ]; do
+		if [ "$1" = "v" ] || [ "$1" = "cv" ]; then
+			printf "self.0.get()"
+			[ "$#" -gt 1 ] && printf ", "
+		else
+			printf "$(gen_param_name "$1")"
+			[ "$#" -gt 1 ] && printf ", "
+		fi
+		shift;
+	done
+}
+
+#gen_proto_order_variant(meta, pfx, name, sfx, order, atomic, ty, int, raw, arg...)
+gen_proto_order_variant()
+{
+	local meta="$1"; shift
+	local pfx="$1"; shift
+	local name="$1"; shift
+	local sfx="$1"; shift
+	local order="$1"; shift
+	local atomic="$1"; shift
+	local ty="$1"; shift
+	local int="$1"; shift
+	local raw="$1"; shift
+
+	local fn_name="${raw}${pfx}${name}${sfx}${order}"
+	local atomicname="${raw}${atomic}_${pfx}${name}${sfx}${order}"
+
+	local ret="$(gen_ret_type "${meta}" "${int}")"
+	local params="$(gen_params "${int}" $@)"
+	local args="$(gen_args "$@")"
+	local retstmt="$(gen_ret_stmt "${meta}")"
+
+cat <
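For reviewers who want a feel for the calling convention the generated `impl` exposes: the wrappers mirror the C `atomic64_*` API one-to-one, including the `try_cmpxchg`-style out-parameter. The sketch below is not part of the patch; it models a few representative wrappers on top of `core::sync::atomic::AtomicI64` so the shapes can be exercised outside the kernel. The `Atomic64` type here and the `SeqCst`/`Relaxed` orderings are stand-ins for the FFI helpers and are only approximations of the kernel's ordering semantics.

```rust
use std::sync::atomic::{AtomicI64, Ordering};

// Stand-in for the kernel's Atomic64(Opaque<atomic64_t>) wrapper.
pub struct Atomic64(AtomicI64);

impl Atomic64 {
    pub fn new(v: i64) -> Self {
        Atomic64(AtomicI64::new(v))
    }

    /// Models `atomic64_add_return`: add `i` and return the *new* value.
    pub fn add_return(&self, i: i64) -> i64 {
        self.0.fetch_add(i, Ordering::SeqCst) + i
    }

    /// Models `atomic64_fetch_add_relaxed`: add `i` and return the *old* value.
    pub fn fetch_add_relaxed(&self, i: i64) -> i64 {
        self.0.fetch_add(i, Ordering::Relaxed)
    }

    /// Models `atomic64_try_cmpxchg`: on failure the observed value is
    /// written back through `old`, matching the C out-parameter convention.
    pub fn try_cmpxchg(&self, old: &mut i64, new: i64) -> bool {
        match self
            .0
            .compare_exchange(*old, new, Ordering::SeqCst, Ordering::SeqCst)
        {
            Ok(_) => true,
            Err(cur) => {
                *old = cur;
                false
            }
        }
    }
}

fn main() {
    let a = Atomic64::new(40);
    // add_return reports the updated value...
    assert_eq!(a.add_return(2), 42);
    // ...while fetch_add reports the value before the update.
    assert_eq!(a.fetch_add_relaxed(1), 42);

    // try_cmpxchg fails against a stale expected value and refreshes it,
    // so the retry loop does not need a separate re-read.
    let mut old = 0;
    assert!(!a.try_cmpxchg(&mut old, 5));
    assert_eq!(old, 43);
    assert!(a.try_cmpxchg(&mut old, 5));
}
```

The out-parameter form of `try_cmpxchg` is the main ergonomic difference from Rust's standard `compare_exchange`, which returns the observed value in a `Result` instead of mutating the caller's copy.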