From patchwork Wed Jan 24 11:20:22 2024
X-Patchwork-Submitter: Alice Ryhl
X-Patchwork-Id: 13528937
Date: Wed, 24 Jan 2024 11:20:22 +0000
In-Reply-To: <20240124-alice-mm-v1-0-d1abcec83c44@google.com>
References: <20240124-alice-mm-v1-0-d1abcec83c44@google.com>
X-Mailer: b4 0.13-dev-26615
Message-ID: <20240124-alice-mm-v1-2-d1abcec83c44@google.com>
Subject: [PATCH 2/3] rust: add typed accessors for userspace pointers
From: Alice Ryhl <aliceryhl@google.com>
To: Miguel Ojeda, Alex Gaynor, Wedson Almeida Filho, Boqun Feng, Gary Guo,
    Björn Roy Baron, Benno Lossin, Andreas Hindborg, Kees Cook, Al Viro,
    Andrew Morton
Cc: Greg Kroah-Hartman, Arve Hjønnevåg, Todd Kjos, Martijn Coenen,
    Joel Fernandes, Carlos Llamas, Suren Baghdasaryan, Arnd Bergmann,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    rust-for-linux@vger.kernel.org, Alice Ryhl, Christian Brauner

Add safe methods for reading and writing Rust values to and from
userspace pointers.

The C methods for copying to/from userspace use a function called
`check_object_size` to verify that the kernel pointer is not dangling.
However, this check is skipped when the length is a compile-time
constant, on the assumption that such cases trivially have a correct
kernel pointer. In this patch, we apply the same optimization to the
typed accessors: for both methods, the size of the operation is known
at compile time to be `size_of` of the type being read or written.
Since the C side doesn't provide a variant that skips only this check,
we add custom helpers for this purpose.

The majority of reads and writes to userspace pointers in the Rust
Binder driver use these accessor methods. Benchmarking found that
skipping the `check_object_size` check makes a significant difference
for the cases skipped here, and that the check makes no measurable
difference for the cases that use the raw read/write methods.

This code is based on something that was originally written by Wedson
on the old rust branch. It was modified by Alice to skip the
`check_object_size` check, and to update various comments, including
the notes about kernel pointers in `WritableToBytes`.
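
As an illustration of the intended use (a hypothetical sketch, not part
of this patch): the reader/writer types and the way they are obtained,
here called `UserSlicePtrReader` and `UserSlicePtrWriter`, are assumed
to come from the earlier patches in this series; only `read`, `write`,
`ReadableFromBytes` and `WritableToBytes` are added by this patch. A
driver working with a flat, padding-free struct could do roughly:

    #[repr(C)]
    #[derive(Copy, Clone)]
    struct FlatCounters {
        completed: u64,
        failed: u64,
    }

    // SAFETY: Two `u64` fields with no padding, so the struct has no
    // uninitialized bytes and every bit-pattern is a valid value.
    unsafe impl ReadableFromBytes for FlatCounters {}
    unsafe impl WritableToBytes for FlatCounters {}

    // Hypothetical helper: read the struct from userspace, update it,
    // and write the result back out through a writer.
    fn bump_counters(
        reader: &mut UserSlicePtrReader,
        writer: &mut UserSlicePtrWriter,
    ) -> Result {
        let mut counters: FlatCounters = reader.read()?;
        counters.completed += 1;
        writer.write(&counters)
    }

Both accessor calls above go through the new helpers that skip
`check_object_size`, since the length of each copy is
`size_of::<FlatCounters>()`, a compile-time constant.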
Co-developed-by: Wedson Almeida Filho
Signed-off-by: Wedson Almeida Filho
Signed-off-by: Alice Ryhl
Signed-off-by: Arnd Bergmann
---
 rust/helpers.c          |  34 +++++++++++++
 rust/kernel/user_ptr.rs | 125 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 159 insertions(+)

diff --git a/rust/helpers.c b/rust/helpers.c
index 312b6fcb49d5..187f445fbf19 100644
--- a/rust/helpers.c
+++ b/rust/helpers.c
@@ -52,6 +52,40 @@ unsigned long rust_helper_copy_to_user(void __user *to, const void *from,
 }
 EXPORT_SYMBOL_GPL(rust_helper_copy_to_user);
 
+/*
+ * These methods skip the `check_object_size` check that `copy_[to|from]_user`
+ * normally performs. In C, these checks are skipped whenever the length is a
+ * compile-time constant, since when that is the case, the kernel pointer
+ * usually points at a local variable that is being initialized and the kernel
+ * pointer is trivially non-dangling.
+ *
+ * These helpers serve the same purpose in Rust. Whenever the length is known
+ * at compile time, we call this helper to skip the check.
+ */
+unsigned long rust_helper_copy_from_user_unsafe_skip_check_object_size(void *to, const void __user *from, unsigned long n)
+{
+	unsigned long res;
+
+	might_fault();
+	instrument_copy_from_user_before(to, from, n);
+	if (should_fail_usercopy())
+		return n;
+	res = raw_copy_from_user(to, from, n);
+	instrument_copy_from_user_after(to, from, n, res);
+	return res;
+}
+EXPORT_SYMBOL_GPL(rust_helper_copy_from_user_unsafe_skip_check_object_size);
+
+unsigned long rust_helper_copy_to_user_unsafe_skip_check_object_size(void __user *to, const void *from, unsigned long n)
+{
+	might_fault();
+	if (should_fail_usercopy())
+		return n;
+	instrument_copy_to_user(to, from, n);
+	return raw_copy_to_user(to, from, n);
+}
+EXPORT_SYMBOL_GPL(rust_helper_copy_to_user_unsafe_skip_check_object_size);
+
 void rust_helper_mutex_lock(struct mutex *lock)
 {
 	mutex_lock(lock);
diff --git a/rust/kernel/user_ptr.rs b/rust/kernel/user_ptr.rs
index 00aa26aa6a83..daa46abe5525 100644
--- a/rust/kernel/user_ptr.rs
+++ b/rust/kernel/user_ptr.rs
@@ -11,6 +11,7 @@
 use crate::{bindings, error::code::*, error::Result};
 use alloc::vec::Vec;
 use core::ffi::{c_ulong, c_void};
+use core::mem::{size_of, MaybeUninit};
 
 /// The maximum length of a operation using `copy_[from|to]_user`.
 ///
@@ -151,6 +152,36 @@ pub unsafe fn read_raw(&mut self, out: *mut u8, len: usize) -> Result {
         Ok(())
     }
 
+    /// Reads a value of the specified type.
+    ///
+    /// Fails with `EFAULT` if the read encounters a page fault.
+    pub fn read<T: ReadableFromBytes>(&mut self) -> Result<T> {
+        if size_of::<T>() > self.1 || size_of::<T>() > MAX_USER_OP_LEN {
+            return Err(EFAULT);
+        }
+        let mut out: MaybeUninit<T> = MaybeUninit::uninit();
+        // SAFETY: The local variable `out` is valid for writing `size_of::<T>()` bytes.
+        let res = unsafe {
+            bindings::copy_from_user_unsafe_skip_check_object_size(
+                out.as_mut_ptr().cast::<c_void>(),
+                self.0,
+                size_of::<T>() as c_ulong,
+            )
+        };
+        if res != 0 {
+            return Err(EFAULT);
+        }
+        // Since this is not a pointer to a valid object in our program,
+        // we cannot use `add`, which has C-style rules for defined
+        // behavior.
+        self.0 = self.0.wrapping_add(size_of::<T>());
+        self.1 -= size_of::<T>();
+        // SAFETY: The read above has initialized all bytes in `out`, and since
+        // `T` implements `ReadableFromBytes`, any bit-pattern is a valid value
+        // for this type.
+        Ok(unsafe { out.assume_init() })
+    }
+
     /// Reads all remaining data in the buffer into a vector.
     ///
     /// Fails with `EFAULT` if the read encounters a page fault.
@@ -219,4 +250,98 @@ pub fn write_slice(&mut self, data: &[u8]) -> Result {
         // `len`, so the pointer is valid for reading `len` bytes.
         unsafe { self.write_raw(ptr, len) }
     }
+
+    /// Writes the provided Rust value to this userspace pointer.
+    ///
+    /// Fails with `EFAULT` if the write encounters a page fault.
+    pub fn write<T: WritableToBytes>(&mut self, value: &T) -> Result {
+        if size_of::<T>() > self.1 || size_of::<T>() > MAX_USER_OP_LEN {
+            return Err(EFAULT);
+        }
+        // SAFETY: The reference points to a value of type `T`, so it is valid
+        // for reading `size_of::<T>()` bytes.
+        let res = unsafe {
+            bindings::copy_to_user_unsafe_skip_check_object_size(
+                self.0,
+                (value as *const T).cast::<c_void>(),
+                size_of::<T>() as c_ulong,
+            )
+        };
+        if res != 0 {
+            return Err(EFAULT);
+        }
+        // Since this is not a pointer to a valid object in our program,
+        // we cannot use `add`, which has C-style rules for defined
+        // behavior.
+        self.0 = self.0.wrapping_add(size_of::<T>());
+        self.1 -= size_of::<T>();
+        Ok(())
+    }
 }
+
+/// Specifies that a type is safely readable from bytes.
+///
+/// Not all types are valid for all values. For example, a `bool` must be either
+/// zero or one, so reading arbitrary bytes into something that contains a
+/// `bool` is not okay.
+///
+/// It's okay for the type to have padding, as initializing those bytes has no
+/// effect.
+///
+/// # Safety
+///
+/// All bit-patterns must be valid for this type.
+pub unsafe trait ReadableFromBytes {}
+
+// SAFETY: All bit patterns are acceptable values of the types below.
+unsafe impl ReadableFromBytes for u8 {}
+unsafe impl ReadableFromBytes for u16 {}
+unsafe impl ReadableFromBytes for u32 {}
+unsafe impl ReadableFromBytes for u64 {}
+unsafe impl ReadableFromBytes for usize {}
+unsafe impl ReadableFromBytes for i8 {}
+unsafe impl ReadableFromBytes for i16 {}
+unsafe impl ReadableFromBytes for i32 {}
+unsafe impl ReadableFromBytes for i64 {}
+unsafe impl ReadableFromBytes for isize {}
+// SAFETY: If all bit patterns are acceptable for individual values in an array,
+// then all bit patterns are also acceptable for arrays of that type.
+unsafe impl<T: ReadableFromBytes> ReadableFromBytes for [T] {}
+unsafe impl<T: ReadableFromBytes, const N: usize> ReadableFromBytes for [T; N] {}
+
+/// Specifies that a type is safely writable to bytes.
+///
+/// If a struct implements this trait, then it is okay to copy it byte-for-byte
+/// to userspace. This means that it should not have any padding, as padding
+/// bytes are uninitialized. Reading uninitialized memory is not just undefined
+/// behavior, it may even lead to leaking sensitive information on the stack to
+/// userspace.
+///
+/// The struct should also not hold kernel pointers, as kernel pointer addresses
+/// are also considered sensitive. However, leaking kernel pointers is not
+/// considered undefined behavior by Rust, so this is a correctness requirement,
+/// but not a safety requirement.
+///
+/// # Safety
+///
+/// Values of this type may not contain any uninitialized bytes.
+pub unsafe trait WritableToBytes {}
+
+// SAFETY: Instances of the following types have no uninitialized portions.
+unsafe impl WritableToBytes for u8 {}
+unsafe impl WritableToBytes for u16 {}
+unsafe impl WritableToBytes for u32 {}
+unsafe impl WritableToBytes for u64 {}
+unsafe impl WritableToBytes for usize {}
+unsafe impl WritableToBytes for i8 {}
+unsafe impl WritableToBytes for i16 {}
+unsafe impl WritableToBytes for i32 {}
+unsafe impl WritableToBytes for i64 {}
+unsafe impl WritableToBytes for isize {}
+unsafe impl WritableToBytes for bool {}
+unsafe impl WritableToBytes for char {}
+unsafe impl WritableToBytes for str {}
+// SAFETY: If individual values in an array have no uninitialized portions, then
+// the array itself does not have any uninitialized portions either.
+unsafe impl<T: WritableToBytes> WritableToBytes for [T] {}
+unsafe impl<T: WritableToBytes, const N: usize> WritableToBytes for [T; N] {}