From patchwork Thu Apr 18 08:59:17 2024
Date: Thu, 18 Apr 2024 08:59:17 +0000
In-Reply-To: <20240418-alice-mm-v6-0-cb8f3e5d688f@google.com>
Message-ID: <20240418-alice-mm-v6-1-cb8f3e5d688f@google.com>
Subject: [PATCH v6 1/4] rust: uaccess: add userspace pointers
From: Alice Ryhl
To: Miguel Ojeda, Matthew Wilcox, Al Viro, Andrew Morton, Kees Cook
Cc: Alex Gaynor, Wedson Almeida Filho, Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg, Greg Kroah-Hartman, Arve Hjønnevåg, Todd Kjos, Martijn Coenen, Joel Fernandes, Carlos Llamas, Suren Baghdasaryan, Arnd Bergmann, Trevor Gross, linux-mm@kvack.org, linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org, Alice Ryhl, Christian Brauner
From: Wedson Almeida Filho

A pointer to an area in userspace memory, which can be either read-only or read-write.

All methods on this struct are safe: attempting to read or write on bad addresses (either out of the bounds of the slice or unmapped addresses) will return `EFAULT`. Concurrent access, *including data races to/from userspace memory*, is permitted, because fundamentally another userspace thread/process could always be modifying memory at the same time (in the same way that userspace Rust's `std::io` permits data races with the contents of files on disk). In the presence of a race, the exact byte values read/written are unspecified, but the operation is well-defined. Kernelspace code should validate its copy of the data after completing a read, and not expect that multiple reads of the same address will return the same value.

These APIs are designed to make it difficult to accidentally write TOCTOU bugs. Every time you read from a memory location, the pointer is advanced by the length of the read, so that reader cannot be used to read the same memory location twice. Preventing double-fetches avoids TOCTOU bugs.
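To make the forward-only semantics concrete, here is a small sketch (not part of the patch; `read_two_chunks` and the 8-byte chunk size are made up for illustration). It only uses the API added below and assumes it runs in a context where the user address belongs to the current process:

```rust
// Illustrative sketch: read two consecutive 8-byte chunks from userspace.
// Each successful read advances the reader, so the second call fetches
// bytes 8..16 and cannot re-fetch bytes 0..8.
use kernel::error::Result;
use kernel::uaccess::{UserPtr, UserSlice};

fn read_two_chunks(uptr: UserPtr, len: usize) -> Result<([u8; 8], [u8; 8])> {
    let mut reader = UserSlice::new(uptr, len).reader();

    let mut first = [0u8; 8];
    reader.read_slice(&mut first)?;

    let mut second = [0u8; 8];
    reader.read_slice(&mut second)?;

    Ok((first, second))
}
```

The single-pass pattern shown here is exactly what the design described next enforces.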
This is accomplished by taking `self` by value to prevent obtaining multiple readers on a given `UserSlice`, and the readers only permitting forward reads. If double-fetching a memory location is necessary for some reason, then that is done by creating multiple readers to the same memory location. Constructing a `UserSlice` performs no checks on the provided address and length, it can safely be constructed inside a kernel thread with no current userspace process. Reads and writes wrap the kernel APIs `copy_from_user` and `copy_to_user`, which check the memory map of the current process and enforce that the address range is within the user range (no additional calls to `access_ok` are needed). This code is based on something that was originally written by Wedson on the old rust branch. It was modified by Alice by removing the `IoBufferReader` and `IoBufferWriter` traits, and various other changes. Signed-off-by: Wedson Almeida Filho Co-developed-by: Alice Ryhl Reviewed-by: Benno Lossin Reviewed-by: Trevor Gross Reviewed-by: Boqun Feng Signed-off-by: Alice Ryhl --- rust/helpers.c | 14 +++ rust/kernel/lib.rs | 1 + rust/kernel/uaccess.rs | 313 +++++++++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 328 insertions(+) diff --git a/rust/helpers.c b/rust/helpers.c index 70e59efd92bc..312b6fcb49d5 100644 --- a/rust/helpers.c +++ b/rust/helpers.c @@ -38,6 +38,20 @@ __noreturn void rust_helper_BUG(void) } EXPORT_SYMBOL_GPL(rust_helper_BUG); +unsigned long rust_helper_copy_from_user(void *to, const void __user *from, + unsigned long n) +{ + return copy_from_user(to, from, n); +} +EXPORT_SYMBOL_GPL(rust_helper_copy_from_user); + +unsigned long rust_helper_copy_to_user(void __user *to, const void *from, + unsigned long n) +{ + return copy_to_user(to, from, n); +} +EXPORT_SYMBOL_GPL(rust_helper_copy_to_user); + void rust_helper_mutex_lock(struct mutex *lock) { mutex_lock(lock); diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs index 9a943d99c71a..7ee807ae4680 100644 --- a/rust/kernel/lib.rs +++ b/rust/kernel/lib.rs @@ -45,6 +45,7 @@ pub mod task; pub mod time; pub mod types; +pub mod uaccess; pub mod workqueue; #[doc(hidden)] diff --git a/rust/kernel/uaccess.rs b/rust/kernel/uaccess.rs new file mode 100644 index 000000000000..ee5623d7b98f --- /dev/null +++ b/rust/kernel/uaccess.rs @@ -0,0 +1,313 @@ +// SPDX-License-Identifier: GPL-2.0 + +//! Slices to user space memory regions. +//! +//! C header: [`include/linux/uaccess.h`](srctree/include/linux/uaccess.h) + +use crate::{alloc::Flags, bindings, error::Result, prelude::*}; +use alloc::vec::Vec; +use core::ffi::{c_ulong, c_void}; +use core::mem::MaybeUninit; + +/// The type used for userspace addresses. +pub type UserPtr = usize; + +/// A pointer to an area in userspace memory, which can be either read-only or read-write. +/// +/// All methods on this struct are safe: attempting to read or write on bad addresses (either out of +/// the bound of the slice or unmapped addresses) will return `EFAULT`. Concurrent access, +/// *including data races to/from userspace memory*, is permitted, because fundamentally another +/// userspace thread/process could always be modifying memory at the same time (in the same way that +/// userspace Rust's [`std::io`] permits data races with the contents of files on disk). In the +/// presence of a race, the exact byte values read/written are unspecified but the operation is +/// well-defined. 
Kernelspace code should validate its copy of data after completing a read, and not +/// expect that multiple reads of the same address will return the same value. +/// +/// These APIs are designed to make it difficult to accidentally write TOCTOU (time-of-check to +/// time-of-use) bugs. Every time a memory location is read, the reader's position is advanced by +/// the read length and the next read will start from there. This helps prevent accidentally reading +/// the same location twice and causing a TOCTOU bug. +/// +/// Creating a [`UserSliceReader`] and/or [`UserSliceWriter`] consumes the `UserSlice`, helping +/// ensure that there aren't multiple readers or writers to the same location. +/// +/// If double-fetching a memory location is necessary for some reason, then that is done by creating +/// multiple readers to the same memory location, e.g. using [`clone_reader`]. +/// +/// # Examples +/// +/// Takes a region of userspace memory from the current process, and modify it by adding one to +/// every byte in the region. +/// +/// ```no_run +/// use alloc::vec::Vec; +/// use core::ffi::c_void; +/// use kernel::error::Result; +/// use kernel::uaccess::{UserPtr, UserSlice}; +/// +/// fn bytes_add_one(uptr: UserPtr, len: usize) -> Result<()> { +/// let (read, mut write) = UserSlice::new(uptr, len).reader_writer(); +/// +/// let mut buf = Vec::new(); +/// read.read_all(&mut buf, GFP_KERNEL)?; +/// +/// for b in &mut buf { +/// *b = b.wrapping_add(1); +/// } +/// +/// write.write_slice(&buf)?; +/// Ok(()) +/// } +/// ``` +/// +/// Example illustrating a TOCTOU (time-of-check to time-of-use) bug. +/// +/// ```no_run +/// use alloc::vec::Vec; +/// use core::ffi::c_void; +/// use kernel::error::{code::EINVAL, Result}; +/// use kernel::uaccess::{UserPtr, UserSlice}; +/// +/// /// Returns whether the data in this region is valid. +/// fn is_valid(uptr: UserPtr, len: usize) -> Result { +/// let read = UserSlice::new(uptr, len).reader(); +/// +/// let mut buf = Vec::new(); +/// read.read_all(&mut buf, GFP_KERNEL)?; +/// +/// todo!() +/// } +/// +/// /// Returns the bytes behind this user pointer if they are valid. +/// fn get_bytes_if_valid(uptr: UserPtr, len: usize) -> Result> { +/// if !is_valid(uptr, len)? { +/// return Err(EINVAL); +/// } +/// +/// let read = UserSlice::new(uptr, len).reader(); +/// +/// let mut buf = Vec::new(); +/// read.read_all(&mut buf, GFP_KERNEL)?; +/// +/// // THIS IS A BUG! The bytes could have changed since we checked them. +/// // +/// // To avoid this kind of bug, don't call `UserSlice::new` multiple +/// // times with the same address. +/// Ok(buf) +/// } +/// ``` +/// +/// [`std::io`]: https://doc.rust-lang.org/std/io/index.html +/// [`clone_reader`]: UserSliceReader::clone_reader +pub struct UserSlice { + ptr: UserPtr, + length: usize, +} + +impl UserSlice { + /// Constructs a user slice from a raw pointer and a length in bytes. + /// + /// Constructing a [`UserSlice`] performs no checks on the provided address and length, it can + /// safely be constructed inside a kernel thread with no current userspace process. Reads and + /// writes wrap the kernel APIs `copy_from_user` and `copy_to_user`, which check the memory map + /// of the current process and enforce that the address range is within the user range (no + /// additional calls to `access_ok` are needed). Validity of the pointer is checked when you + /// attempt to read or write, not in the call to `UserSlice::new`. + /// + /// Callers must be careful to avoid time-of-check-time-of-use (TOCTOU) issues. 
The simplest way + /// is to create a single instance of [`UserSlice`] per user memory block as it reads each byte + /// at most once. + pub fn new(ptr: UserPtr, length: usize) -> Self { + UserSlice { ptr, length } + } + + /// Reads the entirety of the user slice, appending it to the end of the provided buffer. + /// + /// Fails with `EFAULT` if the read happens on a bad address. + pub fn read_all(self, buf: &mut Vec, flags: Flags) -> Result { + self.reader().read_all(buf, flags) + } + + /// Constructs a [`UserSliceReader`]. + pub fn reader(self) -> UserSliceReader { + UserSliceReader { + ptr: self.ptr, + length: self.length, + } + } + + /// Constructs a [`UserSliceWriter`]. + pub fn writer(self) -> UserSliceWriter { + UserSliceWriter { + ptr: self.ptr, + length: self.length, + } + } + + /// Constructs both a [`UserSliceReader`] and a [`UserSliceWriter`]. + /// + /// Usually when this is used, you will first read the data, and then overwrite it afterwards. + pub fn reader_writer(self) -> (UserSliceReader, UserSliceWriter) { + ( + UserSliceReader { + ptr: self.ptr, + length: self.length, + }, + UserSliceWriter { + ptr: self.ptr, + length: self.length, + }, + ) + } +} + +/// A reader for [`UserSlice`]. +/// +/// Used to incrementally read from the user slice. +pub struct UserSliceReader { + ptr: UserPtr, + length: usize, +} + +impl UserSliceReader { + /// Skip the provided number of bytes. + /// + /// Returns an error if skipping more than the length of the buffer. + pub fn skip(&mut self, num_skip: usize) -> Result { + // Update `self.length` first since that's the fallible part of this operation. + self.length = self.length.checked_sub(num_skip).ok_or(EFAULT)?; + self.ptr = self.ptr.wrapping_add(num_skip); + Ok(()) + } + + /// Create a reader that can access the same range of data. + /// + /// Reading from the clone does not advance the current reader. + /// + /// The caller should take care to not introduce TOCTOU issues, as described in the + /// documentation for [`UserSlice`]. + pub fn clone_reader(&self) -> UserSliceReader { + UserSliceReader { + ptr: self.ptr, + length: self.length, + } + } + + /// Returns the number of bytes left to be read from this reader. + /// + /// Note that even reading less than this number of bytes may fail. + pub fn len(&self) -> usize { + self.length + } + + /// Returns `true` if no data is available in the io buffer. + pub fn is_empty(&self) -> bool { + self.length == 0 + } + + /// Reads raw data from the user slice into a kernel buffer. + /// + /// For a version that uses `&mut [u8]`, please see [`UserSliceReader::read_slice`]. + /// + /// Fails with `EFAULT` if the read happens on a bad address, or if the read goes out of bounds + /// of this [`UserSliceReader`]. This call may modify `out` even if it returns an error. + /// + /// # Guarantees + /// + /// After a successful call to this method, all bytes in `out` are initialized. + pub fn read_raw(&mut self, out: &mut [MaybeUninit]) -> Result { + let len = out.len(); + let out_ptr = out.as_mut_ptr().cast::(); + if len > self.length { + return Err(EFAULT); + } + let Ok(len_ulong) = c_ulong::try_from(len) else { + return Err(EFAULT); + }; + // SAFETY: `out_ptr` points into a mutable slice of length `len_ulong`, so we may write + // that many bytes to it. 
+ let res = + unsafe { bindings::copy_from_user(out_ptr, self.ptr as *const c_void, len_ulong) }; + if res != 0 { + return Err(EFAULT); + } + self.ptr = self.ptr.wrapping_add(len); + self.length -= len; + Ok(()) + } + + /// Reads raw data from the user slice into a kernel buffer. + /// + /// Fails with `EFAULT` if the read happens on a bad address, or if the read goes out of bounds + /// of this [`UserSliceReader`]. This call may modify `out` even if it returns an error. + pub fn read_slice(&mut self, out: &mut [u8]) -> Result { + // SAFETY: The types are compatible and `read_raw` doesn't write uninitialized bytes to + // `out`. + let out = unsafe { &mut *(out as *mut [u8] as *mut [MaybeUninit]) }; + self.read_raw(out) + } + + /// Reads the entirety of the user slice, appending it to the end of the provided buffer. + /// + /// Fails with `EFAULT` if the read happens on a bad address. + pub fn read_all(mut self, buf: &mut Vec, flags: Flags) -> Result { + let len = self.length; + buf.reserve(len, flags)?; + + // The call to `try_reserve` was successful, so the spare capacity is at least `len` bytes + // long. + self.read_raw(&mut buf.spare_capacity_mut()[..len])?; + + // SAFETY: Since the call to `read_raw` was successful, so the next `len` bytes of the + // vector have been initialized. + unsafe { buf.set_len(buf.len() + len) }; + Ok(()) + } +} + +/// A writer for [`UserSlice`]. +/// +/// Used to incrementally write into the user slice. +pub struct UserSliceWriter { + ptr: UserPtr, + length: usize, +} + +impl UserSliceWriter { + /// Returns the amount of space remaining in this buffer. + /// + /// Note that even writing less than this number of bytes may fail. + pub fn len(&self) -> usize { + self.length + } + + /// Returns `true` if no more data can be written to this buffer. + pub fn is_empty(&self) -> bool { + self.length == 0 + } + + /// Writes raw data to this user pointer from a kernel buffer. + /// + /// Fails with `EFAULT` if the write happens on a bad address, or if the write goes out of bounds + /// of this [`UserSliceWriter`]. This call may modify the associated userspace slice even if it + /// returns an error. + pub fn write_slice(&mut self, data: &[u8]) -> Result { + let len = data.len(); + let data_ptr = data.as_ptr().cast::(); + if len > self.length { + return Err(EFAULT); + } + let Ok(len_ulong) = c_ulong::try_from(len) else { + return Err(EFAULT); + }; + // SAFETY: `data_ptr` points into an immutable slice of length `len_ulong`, so we may read + // that many bytes from it. 
+        let res = unsafe { bindings::copy_to_user(self.ptr as *mut c_void, data_ptr, len_ulong) };
+        if res != 0 {
+            return Err(EFAULT);
+        }
+        self.ptr = self.ptr.wrapping_add(len);
+        self.length -= len;
+        Ok(())
+    }
+}

From patchwork Thu Apr 18 08:59:18 2024
Date: Thu, 18 Apr 2024 08:59:18 +0000
In-Reply-To: <20240418-alice-mm-v6-0-cb8f3e5d688f@google.com>
Message-ID: <20240418-alice-mm-v6-2-cb8f3e5d688f@google.com>
Subject: [PATCH v6 2/4] uaccess: always export _copy_[from|to]_user with CONFIG_RUST
From: Alice Ryhl
To: Miguel Ojeda, Matthew Wilcox, Al Viro, Andrew Morton, Kees Cook
Cc: Alex Gaynor, Wedson Almeida Filho, Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg, Greg Kroah-Hartman, Arve Hjønnevåg, Todd Kjos, Martijn Coenen, Joel Fernandes, Carlos Llamas, Suren Baghdasaryan, Arnd Bergmann, Trevor Gross, linux-mm@kvack.org, linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org, Alice Ryhl, Christian Brauner
From: Arnd Bergmann

Rust code needs to be able to access _copy_from_user and _copy_to_user so that it can skip the check_copy_size check in cases where the length is known at compile-time, mirroring the logic for when C code will skip check_copy_size. To do this, we ensure that exported versions of these methods are available when CONFIG_RUST is enabled.

Alice has verified that this patch passes the CONFIG_TEST_USER_COPY test on x86 using the Android cuttlefish emulator.
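For context, this is roughly what the Rust side ends up relying on. The real declarations are generated by bindgen rather than written by hand; the extern block below is only a stand-in to show why the out-of-line symbols must exist even when INLINE_COPY_FROM_USER/INLINE_COPY_TO_USER are defined:

```rust
// Sketch only: hand-written stand-ins for the bindgen-generated declarations
// of the exported symbols. Both return the number of bytes that could NOT be
// copied, so 0 means the whole copy succeeded.
use core::ffi::{c_ulong, c_void};

extern "C" {
    fn _copy_from_user(to: *mut c_void, from: *const c_void, n: c_ulong) -> c_ulong;
    fn _copy_to_user(to: *mut c_void, from: *const c_void, n: c_ulong) -> c_ulong;
}

/// Copies `n` bytes from the user address `user_src` into `dst`.
///
/// # Safety
///
/// `dst` must be valid for writing `n` bytes; the user address is only
/// accessed through the fault-handling user-copy machinery.
unsafe fn copy_in(dst: *mut c_void, user_src: usize, n: c_ulong) -> c_ulong {
    unsafe { _copy_from_user(dst, user_src as *const c_void, n) }
}
```

With the symbols always exported under CONFIG_RUST, the typed accessors added later in this series can call them directly and skip check_copy_size for sizes that are known at compile time.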
Signed-off-by: Arnd Bergmann Tested-by: Alice Ryhl Reviewed-by: Boqun Feng Reviewed-by: Kees Cook Signed-off-by: Alice Ryhl --- include/linux/uaccess.h | 38 ++++++++++++++++++++++++-------------- lib/usercopy.c | 30 ++++-------------------------- 2 files changed, 28 insertions(+), 40 deletions(-) diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h index 3064314f4832..2ebfce98b5cc 100644 --- a/include/linux/uaccess.h +++ b/include/linux/uaccess.h @@ -5,6 +5,7 @@ #include #include #include +#include #include #include @@ -138,13 +139,18 @@ __copy_to_user(void __user *to, const void *from, unsigned long n) return raw_copy_to_user(to, from, n); } -#ifdef INLINE_COPY_FROM_USER static inline __must_check unsigned long -_copy_from_user(void *to, const void __user *from, unsigned long n) +_inline_copy_from_user(void *to, const void __user *from, unsigned long n) { unsigned long res = n; might_fault(); if (!should_fail_usercopy() && likely(access_ok(from, n))) { + /* + * Ensure that bad access_ok() speculation will not + * lead to nasty side effects *after* the copy is + * finished: + */ + barrier_nospec(); instrument_copy_from_user_before(to, from, n); res = raw_copy_from_user(to, from, n); instrument_copy_from_user_after(to, from, n, res); @@ -153,14 +159,11 @@ _copy_from_user(void *to, const void __user *from, unsigned long n) memset(to + (n - res), 0, res); return res; } -#else extern __must_check unsigned long _copy_from_user(void *, const void __user *, unsigned long); -#endif -#ifdef INLINE_COPY_TO_USER static inline __must_check unsigned long -_copy_to_user(void __user *to, const void *from, unsigned long n) +_inline_copy_to_user(void __user *to, const void *from, unsigned long n) { might_fault(); if (should_fail_usercopy()) @@ -171,25 +174,32 @@ _copy_to_user(void __user *to, const void *from, unsigned long n) } return n; } -#else extern __must_check unsigned long _copy_to_user(void __user *, const void *, unsigned long); -#endif static __always_inline unsigned long __must_check copy_from_user(void *to, const void __user *from, unsigned long n) { - if (check_copy_size(to, n, false)) - n = _copy_from_user(to, from, n); - return n; + if (!check_copy_size(to, n, false)) + return n; +#ifdef INLINE_COPY_FROM_USER + return _inline_copy_from_user(to, from, n); +#else + return _copy_from_user(to, from, n); +#endif } static __always_inline unsigned long __must_check copy_to_user(void __user *to, const void *from, unsigned long n) { - if (check_copy_size(from, n, true)) - n = _copy_to_user(to, from, n); - return n; + if (!check_copy_size(from, n, true)) + return n; + +#ifdef INLINE_COPY_TO_USER + return _inline_copy_to_user(to, from, n); +#else + return _copy_to_user(to, from, n); +#endif } #ifndef copy_mc_to_kernel diff --git a/lib/usercopy.c b/lib/usercopy.c index d29fe29c6849..de7f30618293 100644 --- a/lib/usercopy.c +++ b/lib/usercopy.c @@ -7,40 +7,18 @@ /* out-of-line parts */ -#ifndef INLINE_COPY_FROM_USER +#if !defined(INLINE_COPY_FROM_USER) || defined(CONFIG_RUST) unsigned long _copy_from_user(void *to, const void __user *from, unsigned long n) { - unsigned long res = n; - might_fault(); - if (!should_fail_usercopy() && likely(access_ok(from, n))) { - /* - * Ensure that bad access_ok() speculation will not - * lead to nasty side effects *after* the copy is - * finished: - */ - barrier_nospec(); - instrument_copy_from_user_before(to, from, n); - res = raw_copy_from_user(to, from, n); - instrument_copy_from_user_after(to, from, n, res); - } - if (unlikely(res)) - memset(to + (n - 
res), 0, res);
-	return res;
+	return _inline_copy_from_user(to, from, n);
 }
 EXPORT_SYMBOL(_copy_from_user);
 #endif

-#ifndef INLINE_COPY_TO_USER
+#if !defined(INLINE_COPY_TO_USER) || defined(CONFIG_RUST)
 unsigned long _copy_to_user(void __user *to, const void *from, unsigned long n)
 {
-	might_fault();
-	if (should_fail_usercopy())
-		return n;
-	if (likely(access_ok(to, n))) {
-		instrument_copy_to_user(to, from, n);
-		n = raw_copy_to_user(to, from, n);
-	}
-	return n;
+	return _inline_copy_to_user(to, from, n);
 }
 EXPORT_SYMBOL(_copy_to_user);
 #endif

From patchwork Thu Apr 18 08:59:19 2024
Date: Thu, 18 Apr 2024 08:59:19 +0000
In-Reply-To: <20240418-alice-mm-v6-0-cb8f3e5d688f@google.com>
Message-ID: <20240418-alice-mm-v6-3-cb8f3e5d688f@google.com>
Subject: [PATCH v6 3/4] rust: uaccess: add typed accessors for userspace pointers
From: Alice Ryhl
To: Miguel Ojeda, Matthew Wilcox, Al Viro, Andrew Morton, Kees Cook
Cc: Alex Gaynor, Wedson Almeida Filho, Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg, Greg Kroah-Hartman, Arve Hjønnevåg, Todd Kjos, Martijn Coenen, Joel Fernandes, Carlos Llamas, Suren Baghdasaryan, Arnd Bergmann, Trevor Gross, linux-mm@kvack.org, linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org, Alice Ryhl, Christian Brauner

Add safe methods for reading and writing Rust values to and from userspace pointers.

The C methods for copying to/from userspace use a function called `check_object_size` to verify that the kernel pointer is not dangling. However, this check is skipped when the length is a compile-time constant, with the assumption that such cases trivially have a correct kernel pointer.

In this patch, we apply the same optimization to the typed accessors. For both methods, the size of the operation is known at compile time to be size_of of the type being read or written. Since the C side doesn't provide a variant that skips only this check, we create custom helpers for this purpose.
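To sketch how the typed accessors are intended to be used (this example is not part of the patch; `RequestHeader` and `parse_header` are hypothetical), a driver declares a fixed-layout type, promises via the marker traits that it is plain data, and then reads it with a single typed copy:

```rust
// Illustrative sketch: a hypothetical fixed-layout request header read from
// userspace with the typed accessor added in this patch.
use kernel::error::Result;
use kernel::types::{AsBytes, FromBytes};
use kernel::uaccess::{UserPtr, UserSlice};

#[repr(C)]
struct RequestHeader {
    cmd: u32,
    flags: u32,
    payload_len: u64,
}

// SAFETY: All fields are plain integers, so every bit pattern is a valid
// value (`FromBytes`), there are no padding bytes, and the type has no
// interior mutability (`AsBytes`).
unsafe impl FromBytes for RequestHeader {}
unsafe impl AsBytes for RequestHeader {}

fn parse_header(uptr: UserPtr, len: usize) -> Result<RequestHeader> {
    let mut reader = UserSlice::new(uptr, len).reader();
    // One copy_from_user of size_of::<RequestHeader>() bytes; on success the
    // reader has advanced past the header.
    reader.read::<RequestHeader>()
}
```

The `write` accessor works the same way in the other direction for types that implement `AsBytes`.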
The majority of reads and writes to userspace pointers in the Rust Binder driver uses these accessor methods. Benchmarking has found that skipping the `check_object_size` check makes a big difference for the cases being skipped here. (And that the check doesn't make a difference for the cases that use the raw read/write methods.) This code is based on something that was originally written by Wedson on the old rust branch. It was modified by Alice to skip the `check_object_size` check, and to update various comments, including the notes about kernel pointers in `WritableToBytes`. Co-developed-by: Wedson Almeida Filho Signed-off-by: Wedson Almeida Filho Reviewed-by: Benno Lossin Reviewed-by: Boqun Feng Reviewed-by: Trevor Gross Signed-off-by: Alice Ryhl Reviewed-by: Gary Guo --- rust/kernel/types.rs | 64 ++++++++++++++++++++++++++++++++++++++++ rust/kernel/uaccess.rs | 79 ++++++++++++++++++++++++++++++++++++++++++++++++-- 2 files changed, 141 insertions(+), 2 deletions(-) diff --git a/rust/kernel/types.rs b/rust/kernel/types.rs index 8fad61268465..9c57c6c75553 100644 --- a/rust/kernel/types.rs +++ b/rust/kernel/types.rs @@ -409,3 +409,67 @@ pub enum Either { /// Constructs an instance of [`Either`] containing a value of type `R`. Right(R), } + +/// Types for which any bit pattern is valid. +/// +/// Not all types are valid for all values. For example, a `bool` must be either zero or one, so +/// reading arbitrary bytes into something that contains a `bool` is not okay. +/// +/// It's okay for the type to have padding, as initializing those bytes has no effect. +/// +/// # Safety +/// +/// All bit-patterns must be valid for this type. This type must not have interior mutability. +pub unsafe trait FromBytes {} + +// SAFETY: All bit patterns are acceptable values of the types below. +unsafe impl FromBytes for u8 {} +unsafe impl FromBytes for u16 {} +unsafe impl FromBytes for u32 {} +unsafe impl FromBytes for u64 {} +unsafe impl FromBytes for usize {} +unsafe impl FromBytes for i8 {} +unsafe impl FromBytes for i16 {} +unsafe impl FromBytes for i32 {} +unsafe impl FromBytes for i64 {} +unsafe impl FromBytes for isize {} +// SAFETY: If all bit patterns are acceptable for individual values in an array, then all bit +// patterns are also acceptable for arrays of that type. +unsafe impl FromBytes for [T] {} +unsafe impl FromBytes for [T; N] {} + +/// Types that can be viewed as an immutable slice of initialized bytes. +/// +/// If a struct implements this trait, then it is okay to copy it byte-for-byte to userspace. This +/// means that it should not have any padding, as padding bytes are uninitialized. Reading +/// uninitialized memory is not just undefined behavior, it may even lead to leaking sensitive +/// information on the stack to userspace. +/// +/// The struct should also not hold kernel pointers, as kernel pointer addresses are also considered +/// sensitive. However, leaking kernel pointers is not considered undefined behavior by Rust, so +/// this is a correctness requirement, but not a safety requirement. +/// +/// # Safety +/// +/// Values of this type may not contain any uninitialized bytes. This type must not have interior +/// mutability. +pub unsafe trait AsBytes {} + +// SAFETY: Instances of the following types have no uninitialized portions. 
+unsafe impl AsBytes for u8 {} +unsafe impl AsBytes for u16 {} +unsafe impl AsBytes for u32 {} +unsafe impl AsBytes for u64 {} +unsafe impl AsBytes for usize {} +unsafe impl AsBytes for i8 {} +unsafe impl AsBytes for i16 {} +unsafe impl AsBytes for i32 {} +unsafe impl AsBytes for i64 {} +unsafe impl AsBytes for isize {} +unsafe impl AsBytes for bool {} +unsafe impl AsBytes for char {} +unsafe impl AsBytes for str {} +// SAFETY: If individual values in an array have no uninitialized portions, then the array itself +// does not have any uninitialized portions either. +unsafe impl AsBytes for [T] {} +unsafe impl AsBytes for [T; N] {} diff --git a/rust/kernel/uaccess.rs b/rust/kernel/uaccess.rs index ee5623d7b98f..39481e374c40 100644 --- a/rust/kernel/uaccess.rs +++ b/rust/kernel/uaccess.rs @@ -4,10 +4,16 @@ //! //! C header: [`include/linux/uaccess.h`](srctree/include/linux/uaccess.h) -use crate::{alloc::Flags, bindings, error::Result, prelude::*}; +use crate::{ + alloc::Flags, + bindings, + error::Result, + prelude::*, + types::{AsBytes, FromBytes}, +}; use alloc::vec::Vec; use core::ffi::{c_ulong, c_void}; -use core::mem::MaybeUninit; +use core::mem::{size_of, MaybeUninit}; /// The type used for userspace addresses. pub type UserPtr = usize; @@ -247,6 +253,41 @@ pub fn read_slice(&mut self, out: &mut [u8]) -> Result { self.read_raw(out) } + /// Reads a value of the specified type. + /// + /// Fails with `EFAULT` if the read happens on a bad address, or if the read goes out of bounds + /// of this [`UserSliceReader`]. + pub fn read(&mut self) -> Result { + let len = size_of::(); + if len > self.length { + return Err(EFAULT); + } + let Ok(len_ulong) = c_ulong::try_from(len) else { + return Err(EFAULT); + }; + let mut out: MaybeUninit = MaybeUninit::uninit(); + // SAFETY: The local variable `out` is valid for writing `size_of::()` bytes. + // + // By using the _copy_from_user variant, we skip the check_object_size check that verifies + // the kernel pointer. This mirrors the logic on the C side that skips the check when the + // length is a compile-time constant. + let res = unsafe { + bindings::_copy_from_user( + out.as_mut_ptr().cast::(), + self.ptr as *const c_void, + len_ulong, + ) + }; + if res != 0 { + return Err(EFAULT); + } + self.ptr = self.ptr.wrapping_add(len); + self.length -= len; + // SAFETY: The read above has initialized all bytes in `out`, and since `T` implements + // `FromBytes`, any bit-pattern is a valid value for this type. + Ok(unsafe { out.assume_init() }) + } + /// Reads the entirety of the user slice, appending it to the end of the provided buffer. /// /// Fails with `EFAULT` if the read happens on a bad address. @@ -310,4 +351,38 @@ pub fn write_slice(&mut self, data: &[u8]) -> Result { self.length -= len; Ok(()) } + + /// Writes the provided Rust value to this userspace pointer. + /// + /// Fails with `EFAULT` if the write happens on a bad address, or if the write goes out of bounds + /// of this [`UserSliceWriter`]. This call may modify the associated userspace slice even if it + /// returns an error. + pub fn write(&mut self, value: &T) -> Result { + let len = size_of::(); + if len > self.length { + return Err(EFAULT); + } + let Ok(len_ulong) = c_ulong::try_from(len) else { + return Err(EFAULT); + }; + // SAFETY: The reference points to a value of type `T`, so it is valid for reading + // `size_of::()` bytes. + // + // By using the _copy_to_user variant, we skip the check_object_size check that verifies the + // kernel pointer. 
This mirrors the logic on the C side that skips the check when the length
+        // is a compile-time constant.
+        let res = unsafe {
+            bindings::_copy_to_user(
+                self.ptr as *mut c_void,
+                (value as *const T).cast::<c_void>(),
+                len_ulong,
+            )
+        };
+        if res != 0 {
+            return Err(EFAULT);
+        }
+        self.ptr = self.ptr.wrapping_add(len);
+        self.length -= len;
+        Ok(())
+    }
 }

From patchwork Thu Apr 18 08:59:20 2024
From patchwork Thu Apr 18 08:59:20 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alice Ryhl
X-Patchwork-Id: 13634366
Date: Thu, 18 Apr 2024 08:59:20 +0000
In-Reply-To: <20240418-alice-mm-v6-0-cb8f3e5d688f@google.com>
References: <20240418-alice-mm-v6-0-cb8f3e5d688f@google.com>
X-Mailer: b4 0.13-dev-26615
Message-ID: <20240418-alice-mm-v6-4-cb8f3e5d688f@google.com>
Subject: [PATCH v6 4/4] rust: add abstraction for `struct page`
From: Alice Ryhl
To: Miguel Ojeda, Matthew Wilcox, Al Viro, Andrew Morton, Kees Cook
Cc: Alex Gaynor, Wedson Almeida Filho, Boqun Feng, Gary Guo,
 Björn Roy Baron, Benno Lossin, Andreas Hindborg, Greg Kroah-Hartman,
 Arve Hjønnevåg, Todd Kjos, Martijn Coenen, Joel Fernandes, Carlos Llamas,
 Suren Baghdasaryan, Arnd Bergmann, Trevor Gross, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org, Alice Ryhl,
 Christian Brauner

Adds a new struct called `Page` that wraps a pointer to `struct page`. This
struct is assumed to hold ownership over the page, so that Rust code can
allocate and manage pages directly.

The page type has various methods for reading and writing into the page.
These methods will temporarily map the page to allow the operation. All of
these methods use a helper that takes an offset and length, performs bounds
checks, and returns a pointer to the given offset in the page.

This patch only adds support for pages of order zero, as that is all Rust
Binder needs. However, it is written to make it easy to add support for
higher-order pages in the future. To do that, you would add a const generic
parameter to `Page` that specifies the order. Most of the methods do not need
to be adjusted, as the logic for dealing with mapping multiple pages at once
can be isolated to just the `with_pointer_into_page` method.
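To make the const generic suggestion above concrete, here is a rough editorial sketch (not part of the patch) of one shape such an extension of rust/kernel/page.rs could take. The `ORDER` parameter, the `SIZE` constant and the `alloc_pages` constructor are invented names; `Flags`, `AllocError`, `PAGE_SIZE` and `bindings::alloc_pages` are the ones used by this series.

// Hypothetical extension sketch for rust/kernel/page.rs, not part of this patch.
use crate::{
    alloc::{AllocError, Flags},
    bindings,
};
use core::ptr::NonNull;

/// A pointer to an allocation of 2^ORDER contiguous pages, owning the allocation.
pub struct Page<const ORDER: u32 = 0> {
    page: NonNull<bindings::page>,
}

impl<const ORDER: u32> Page<ORDER> {
    /// The number of bytes covered by this allocation.
    pub const SIZE: usize = PAGE_SIZE << ORDER;

    /// Allocates 2^ORDER physically contiguous pages.
    pub fn alloc_pages(flags: Flags) -> Result<Self, AllocError> {
        // SAFETY: As for order-zero pages, this call is always safe; it may sleep
        // depending on `flags`.
        let page = unsafe { bindings::alloc_pages(flags.as_raw(), ORDER) };
        let page = NonNull::new(page).ok_or(AllocError)?;
        Ok(Self { page })
    }
}

The bounds checks in `with_pointer_into_page` would then compare against `Self::SIZE` rather than `PAGE_SIZE`, which is why the rest of the methods would not need to change.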
Rust Binder needs to manage pages directly as that is how transactions are
delivered: Each process has an mmap'd region for incoming transactions. When
an incoming transaction arrives, the Binder driver will choose a region in the
mmap, allocate and map the relevant pages manually, and copy the incoming
transaction directly into the page. This architecture allows the driver to
copy transactions directly from the address space of one process to another,
without an intermediate copy to a kernel buffer.

This code is based on Wedson's page abstractions from the old rust branch, but
it has been modified by Alice by removing the incomplete support for
higher-order pages, by introducing the `with_*` helpers to consolidate the
bounds checking logic into a single place, and various other changes.

Co-developed-by: Wedson Almeida Filho
Signed-off-by: Wedson Almeida Filho
Reviewed-by: Andreas Hindborg
Reviewed-by: Trevor Gross
Reviewed-by: Benno Lossin
Signed-off-by: Alice Ryhl
Reviewed-by: Boqun Feng
---
 rust/bindings/bindings_helper.h |   1 +
 rust/helpers.c                  |  20 ++++
 rust/kernel/alloc.rs            |   7 ++
 rust/kernel/lib.rs              |   1 +
 rust/kernel/page.rs             | 250 ++++++++++++++++++++++++++++++++++++++++
 5 files changed, 279 insertions(+)

diff --git a/rust/bindings/bindings_helper.h b/rust/bindings/bindings_helper.h
index ddb5644d4fd9..0862261cfbed 100644
--- a/rust/bindings/bindings_helper.h
+++ b/rust/bindings/bindings_helper.h
@@ -20,6 +20,7 @@
 
 /* `bindgen` gets confused at certain things. */
 const size_t RUST_CONST_HELPER_ARCH_SLAB_MINALIGN = ARCH_SLAB_MINALIGN;
+const size_t RUST_CONST_HELPER_PAGE_SIZE = PAGE_SIZE;
 const gfp_t RUST_CONST_HELPER_GFP_ATOMIC = GFP_ATOMIC;
 const gfp_t RUST_CONST_HELPER_GFP_KERNEL = GFP_KERNEL;
 const gfp_t RUST_CONST_HELPER_GFP_KERNEL_ACCOUNT = GFP_KERNEL_ACCOUNT;
diff --git a/rust/helpers.c b/rust/helpers.c
index 312b6fcb49d5..72361003ba91 100644
--- a/rust/helpers.c
+++ b/rust/helpers.c
@@ -25,6 +25,8 @@
 #include 
 #include 
 #include 
+#include 
+#include 
 #include 
 #include 
 #include 
@@ -93,6 +95,24 @@ int rust_helper_signal_pending(struct task_struct *t)
 }
 EXPORT_SYMBOL_GPL(rust_helper_signal_pending);
 
+struct page *rust_helper_alloc_pages(gfp_t gfp_mask, unsigned int order)
+{
+	return alloc_pages(gfp_mask, order);
+}
+EXPORT_SYMBOL_GPL(rust_helper_alloc_pages);
+
+void *rust_helper_kmap_local_page(struct page *page)
+{
+	return kmap_local_page(page);
+}
+EXPORT_SYMBOL_GPL(rust_helper_kmap_local_page);
+
+void rust_helper_kunmap_local(const void *addr)
+{
+	kunmap_local(addr);
+}
+EXPORT_SYMBOL_GPL(rust_helper_kunmap_local);
+
 refcount_t rust_helper_REFCOUNT_INIT(int n)
 {
 	return (refcount_t)REFCOUNT_INIT(n);
diff --git a/rust/kernel/alloc.rs b/rust/kernel/alloc.rs
index f1c2c4aa22d2..7ab2b33f19d4 100644
--- a/rust/kernel/alloc.rs
+++ b/rust/kernel/alloc.rs
@@ -20,6 +20,13 @@
 #[derive(Clone, Copy)]
 pub struct Flags(u32);
 
+impl Flags {
+    /// Get the raw representation of this flag.
+    pub(crate) fn as_raw(self) -> u32 {
+        self.0
+    }
+}
+
 impl core::ops::BitOr for Flags {
     type Output = Self;
     fn bitor(self, rhs: Self) -> Self::Output {
diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
index 7ee807ae4680..048e1662829a 100644
--- a/rust/kernel/lib.rs
+++ b/rust/kernel/lib.rs
@@ -35,6 +35,7 @@
 pub mod kunit;
 #[cfg(CONFIG_NET)]
 pub mod net;
+pub mod page;
 pub mod prelude;
 pub mod print;
 mod static_assert;
diff --git a/rust/kernel/page.rs b/rust/kernel/page.rs
new file mode 100644
index 000000000000..121d20066645
--- /dev/null
+++ b/rust/kernel/page.rs
@@ -0,0 +1,250 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Kernel page allocation and management.
+
+use crate::{
+    alloc::{AllocError, Flags},
+    bindings,
+    error::code::*,
+    error::Result,
+    uaccess::UserSliceReader,
+};
+use core::ptr::{self, NonNull};
+
+/// A bitwise shift for the page size.
+pub const PAGE_SHIFT: usize = bindings::PAGE_SHIFT as usize;
+
+/// The number of bytes in a page.
+pub const PAGE_SIZE: usize = bindings::PAGE_SIZE;
+
+/// A bitmask that gives the page containing a given address.
+pub const PAGE_MASK: usize = !(PAGE_SIZE - 1);
+
+/// A pointer to a page that owns the page allocation.
+///
+/// # Invariants
+///
+/// The pointer is valid, and has ownership over the page.
+pub struct Page {
+    page: NonNull<bindings::page>,
+}
+
+// SAFETY: Pages have no logic that relies on them staying on a given thread, so moving them across
+// threads is safe.
+unsafe impl Send for Page {}
+
+// SAFETY: Pages have no logic that relies on them not being accessed concurrently, so accessing
+// them concurrently is safe.
+unsafe impl Sync for Page {}
+
+impl Page {
+    /// Allocates a new page.
+    ///
+    /// # Examples
+    ///
+    /// Allocate memory for a page.
+    ///
+    /// ```
+    /// use kernel::page::Page;
+    ///
+    /// # fn dox() -> Result<(), kernel::alloc::AllocError> {
+    /// let page = Page::alloc_page(GFP_KERNEL)?;
+    /// # Ok(()) }
+    /// ```
+    ///
+    /// Allocate memory for a page and zero its contents.
+    ///
+    /// ```
+    /// use kernel::page::Page;
+    ///
+    /// # fn dox() -> Result<(), kernel::alloc::AllocError> {
+    /// let page = Page::alloc_page(GFP_KERNEL | __GFP_ZERO)?;
+    /// # Ok(()) }
+    /// ```
+    pub fn alloc_page(flags: Flags) -> Result<Self, AllocError> {
+        // SAFETY: Depending on the value of `flags`, this call may sleep. Other than that, it
+        // is always safe to call this method.
+        let page = unsafe { bindings::alloc_pages(flags.as_raw(), 0) };
+        let page = NonNull::new(page).ok_or(AllocError)?;
+        // INVARIANT: We just successfully allocated a page, so we now have ownership of the newly
+        // allocated page. We transfer that ownership to the new `Page` object.
+        Ok(Self { page })
+    }
+
+    /// Returns a raw pointer to the page.
+    pub fn as_ptr(&self) -> *mut bindings::page {
+        self.page.as_ptr()
+    }
+
+    /// Runs a piece of code with this page mapped to an address.
+    ///
+    /// The page is unmapped when this call returns.
+    ///
+    /// # Using the raw pointer
+    ///
+    /// It is up to the caller to use the provided raw pointer correctly. The pointer is valid for
+    /// `PAGE_SIZE` bytes and for the duration in which the closure is called. The pointer might
+    /// only be mapped on the current thread, and when that is the case, dereferencing it on other
+    /// threads is UB. Other than that, the usual rules for dereferencing a raw pointer apply:
+    /// don't cause data races, the memory may be uninitialized, and so on.
+    ///
+    /// If multiple threads map the same page at the same time, then they may see
+    /// different addresses.
+    /// However, even if the addresses are different, the underlying memory is still the same for
+    /// these purposes (e.g., it's still a data race if they both write to the same underlying
+    /// byte at the same time).
+    fn with_page_mapped<T>(&self, f: impl FnOnce(*mut u8) -> T) -> T {
+        // SAFETY: `page` is valid due to the type invariants on `Page`.
+        let mapped_addr = unsafe { bindings::kmap_local_page(self.as_ptr()) };
+
+        let res = f(mapped_addr.cast());
+
+        // This unmaps the page mapped above.
+        //
+        // SAFETY: Since this API takes the user code as a closure, it can only be used in a manner
+        // where the pages are unmapped in reverse order. This is as required by `kunmap_local`.
+        //
+        // In other words, if this call to `kunmap_local` happens when a different page should be
+        // unmapped first, then there must necessarily be a call to `kmap_local_page` other than
+        // the call just above in `with_page_mapped` that made that possible. In this case, it is
+        // the unsafe block that wraps that other call that is incorrect.
+        unsafe { bindings::kunmap_local(mapped_addr) };
+
+        res
+    }
+
+    /// Runs a piece of code with a raw pointer to a slice of this page, with bounds checking.
+    ///
+    /// If `f` is called, then it will be called with a pointer that points at `off` bytes into the
+    /// page, and the pointer will be valid for at least `len` bytes. The pointer is only valid on
+    /// this task, as this method uses a local mapping.
+    ///
+    /// If `off` and `len` refer to a region outside of this page, then this method returns
+    /// `EINVAL` and does not call `f`.
+    ///
+    /// # Using the raw pointer
+    ///
+    /// It is up to the caller to use the provided raw pointer correctly. The pointer is valid for
+    /// `len` bytes and for the duration in which the closure is called. The pointer might only be
+    /// mapped on the current thread, and when that is the case, dereferencing it on other threads
+    /// is UB. Other than that, the usual rules for dereferencing a raw pointer apply: don't cause
+    /// data races, the memory may be uninitialized, and so on.
+    ///
+    /// If multiple threads map the same page at the same time, then they may see different
+    /// addresses. However, even if the addresses are different, the underlying memory is still the
+    /// same for these purposes (e.g., it's still a data race if they both write to the same
+    /// underlying byte at the same time).
+    fn with_pointer_into_page<T>(
+        &self,
+        off: usize,
+        len: usize,
+        f: impl FnOnce(*mut u8) -> Result<T>,
+    ) -> Result<T> {
+        let bounds_ok = off <= PAGE_SIZE && len <= PAGE_SIZE && (off + len) <= PAGE_SIZE;
+
+        if bounds_ok {
+            self.with_page_mapped(move |page_addr| {
+                // SAFETY: The `off` integer is at most `PAGE_SIZE`, so this pointer offset will
+                // result in a pointer that is in bounds or one off the end of the page.
+                f(unsafe { page_addr.add(off) })
+            })
+        } else {
+            Err(EINVAL)
+        }
+    }
+
+    /// Maps the page and reads from it into the given buffer.
+    ///
+    /// This method will perform bounds checks on the page offset. If `offset .. offset+len` goes
+    /// outside of the page, then this call returns `EINVAL`.
+    ///
+    /// # Safety
+    ///
+    /// * Callers must ensure that `dst` is valid for writing `len` bytes.
+    /// * Callers must ensure that this call does not race with a write to the same page that
+    ///   overlaps with this read.
+    pub unsafe fn read_raw(&self, dst: *mut u8, offset: usize, len: usize) -> Result {
+        self.with_pointer_into_page(offset, len, move |src| {
+            // SAFETY: If `with_pointer_into_page` calls into this closure, then it has performed
+            // a bounds check and guarantees that `src` is valid for `len` bytes.
+            //
+            // The caller guarantees that there is no data race.
+            unsafe { ptr::copy_nonoverlapping(src, dst, len) };
+            Ok(())
+        })
+    }
+
+    /// Maps the page and writes into it from the given buffer.
+    ///
+    /// This method will perform bounds checks on the page offset. If `offset .. offset+len` goes
+    /// outside of the page, then this call returns `EINVAL`.
+    ///
+    /// # Safety
+    ///
+    /// * Callers must ensure that `src` is valid for reading `len` bytes.
+    /// * Callers must ensure that this call does not race with a read or write to the same page
+    ///   that overlaps with this write.
+    pub unsafe fn write_raw(&self, src: *const u8, offset: usize, len: usize) -> Result {
+        self.with_pointer_into_page(offset, len, move |dst| {
+            // SAFETY: If `with_pointer_into_page` calls into this closure, then it has performed
+            // a bounds check and guarantees that `dst` is valid for `len` bytes.
+            //
+            // The caller guarantees that there is no data race.
+            unsafe { ptr::copy_nonoverlapping(src, dst, len) };
+            Ok(())
+        })
+    }
+
+    /// Maps the page and zeroes the given slice.
+    ///
+    /// This method will perform bounds checks on the page offset. If `offset .. offset+len` goes
+    /// outside of the page, then this call returns `EINVAL`.
+    ///
+    /// # Safety
+    ///
+    /// Callers must ensure that this call does not race with a read or write to the same page
+    /// that overlaps with this write.
+    pub unsafe fn fill_zero_raw(&self, offset: usize, len: usize) -> Result {
+        self.with_pointer_into_page(offset, len, move |dst| {
+            // SAFETY: If `with_pointer_into_page` calls into this closure, then it has performed
+            // a bounds check and guarantees that `dst` is valid for `len` bytes.
+            //
+            // The caller guarantees that there is no data race.
+            unsafe { ptr::write_bytes(dst, 0u8, len) };
+            Ok(())
+        })
+    }
+
+    /// Copies data from userspace into this page.
+    ///
+    /// This method will perform bounds checks on the page offset. If `offset .. offset+len` goes
+    /// outside of the page, then this call returns `EINVAL`.
+    ///
+    /// Like the other `UserSliceReader` methods, data races are allowed on the userspace address.
+    /// However, they are not allowed on the page you are copying into.
+    ///
+    /// # Safety
+    ///
+    /// Callers must ensure that this call does not race with a read or write to the same page
+    /// that overlaps with this write.
+    pub unsafe fn copy_from_user_slice_raw(
+        &self,
+        reader: &mut UserSliceReader,
+        offset: usize,
+        len: usize,
+    ) -> Result {
+        self.with_pointer_into_page(offset, len, move |dst| {
+            // SAFETY: If `with_pointer_into_page` calls into this closure, then it has performed
+            // a bounds check and guarantees that `dst` is valid for `len` bytes. Furthermore, we
+            // have exclusive access to the slice since the caller guarantees that there are no
+            // races.
+            reader.read_raw(unsafe { core::slice::from_raw_parts_mut(dst.cast(), len) })
+        })
+    }
+}
+
+impl Drop for Page {
+    fn drop(&mut self) {
+        // SAFETY: By the type invariants, we have ownership of the page and can free it.
+        unsafe { bindings::__free_pages(self.page.as_ptr(), 0) };
+    }
+}
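As a closing illustration (editorial, not part of the patch), here is roughly how the resulting API fits together. The `page_roundtrip` function is invented for the example, and it assumes `GFP_KERNEL`/`__GFP_ZERO` are in scope via the kernel prelude and that the usual `AllocError`-to-`Error` conversion applies.

use kernel::page::Page;
use kernel::prelude::*;

/// Allocates a zeroed page, writes a few bytes into it and reads them back.
fn page_roundtrip() -> Result {
    let page = Page::alloc_page(GFP_KERNEL | __GFP_ZERO)?;

    let src = [1u8, 2, 3, 4];
    let mut dst = [0u8; 4];

    // SAFETY: `src` is valid for reading and `dst` for writing 4 bytes, and this
    // page is not accessed from anywhere else, so there are no data races. The
    // offset/length pair is bounds-checked by the page itself.
    unsafe {
        page.write_raw(src.as_ptr(), 0, src.len())?;
        page.read_raw(dst.as_mut_ptr(), 0, dst.len())?;
    }

    assert_eq!(src, dst);
    Ok(())
}

Note that the only unsafe obligations left to the caller are the validity of the raw buffers and the absence of racing accesses to the page; the mapping, unmapping and bounds checks are all handled by the `with_*` helpers.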