From patchwork Thu Apr 4 12:31:39 2024
X-Patchwork-Submitter: Alice Ryhl
X-Patchwork-Id: 13617770
From: Alice Ryhl
Date: Thu, 04 Apr 2024 12:31:39 +0000
Subject: [PATCH v4 1/4] rust: uaccess: add userspace pointers
Message-ID: <20240404-alice-mm-v4-1-49a84242cf02@google.com>
In-Reply-To: <20240404-alice-mm-v4-0-49a84242cf02@google.com>
References: <20240404-alice-mm-v4-0-49a84242cf02@google.com>
To: Miguel Ojeda, Matthew Wilcox, Al Viro, Andrew Morton, Kees Cook
Cc: Alex Gaynor, Wedson Almeida Filho, Boqun Feng, Gary Guo,
    Björn Roy Baron, Benno Lossin, Andreas Hindborg, Greg Kroah-Hartman,
    Arve Hjønnevåg, Todd Kjos, Martijn Coenen, Joel Fernandes,
    Carlos Llamas, Suren Baghdasaryan, Arnd Bergmann, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org,
    Alice Ryhl, Christian Brauner
From: Wedson Almeida Filho

A pointer to an area in userspace memory, which can be either read-only
or read-write.

All methods on this struct are safe: attempting to read or write on bad
addresses (either out of the bounds of the slice or unmapped addresses)
will return `EFAULT`. Concurrent access, *including data races to/from
userspace memory*, is permitted, because fundamentally another userspace
thread/process could always be modifying memory at the same time (in the
same way that userspace Rust's `std::io` permits data races with the
contents of files on disk). In the presence of a race, the exact byte
values read/written are unspecified, but the operation is well-defined.
Kernelspace code should validate its copy of the data after completing a
read, and must not expect that multiple reads of the same address will
return the same value.

These APIs are designed to make it difficult to accidentally write TOCTOU
(time-of-check to time-of-use) bugs. Every time you read from a memory
location, the pointer is advanced by the length of the read, so the same
reader cannot fetch the same memory location twice; preventing such
double-fetches avoids TOCTOU bugs. This is accomplished by taking `self`
by value, so that multiple readers cannot be obtained for a given
`UserSlicePtr`, and by only permitting forward reads from each reader.
If double-fetching a memory location is necessary for some reason, then that is done by creating multiple readers to the same memory location. Constructing a `UserSlicePtr` performs no checks on the provided address and length, it can safely be constructed inside a kernel thread with no current userspace process. Reads and writes wrap the kernel APIs `copy_from_user` and `copy_to_user`, which check the memory map of the current process and enforce that the address range is within the user range (no additional calls to `access_ok` are needed). This code is based on something that was originally written by Wedson on the old rust branch. It was modified by Alice by removing the `IoBufferReader` and `IoBufferWriter` traits, and various other changes. Signed-off-by: Wedson Almeida Filho Co-developed-by: Alice Ryhl Signed-off-by: Alice Ryhl --- rust/helpers.c | 14 +++ rust/kernel/lib.rs | 1 + rust/kernel/uaccess.rs | 311 +++++++++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 326 insertions(+) diff --git a/rust/helpers.c b/rust/helpers.c index 70e59efd92bc..312b6fcb49d5 100644 --- a/rust/helpers.c +++ b/rust/helpers.c @@ -38,6 +38,20 @@ __noreturn void rust_helper_BUG(void) } EXPORT_SYMBOL_GPL(rust_helper_BUG); +unsigned long rust_helper_copy_from_user(void *to, const void __user *from, + unsigned long n) +{ + return copy_from_user(to, from, n); +} +EXPORT_SYMBOL_GPL(rust_helper_copy_from_user); + +unsigned long rust_helper_copy_to_user(void __user *to, const void *from, + unsigned long n) +{ + return copy_to_user(to, from, n); +} +EXPORT_SYMBOL_GPL(rust_helper_copy_to_user); + void rust_helper_mutex_lock(struct mutex *lock) { mutex_lock(lock); diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs index be68d5e567b1..37f84223b83f 100644 --- a/rust/kernel/lib.rs +++ b/rust/kernel/lib.rs @@ -49,6 +49,7 @@ pub mod task; pub mod time; pub mod types; +pub mod uaccess; pub mod workqueue; #[doc(hidden)] diff --git a/rust/kernel/uaccess.rs b/rust/kernel/uaccess.rs new file mode 100644 index 000000000000..3f8ad4dc13c4 --- /dev/null +++ b/rust/kernel/uaccess.rs @@ -0,0 +1,311 @@ +// SPDX-License-Identifier: GPL-2.0 + +//! Slices to user space memory regions. +//! +//! C header: [`include/linux/uaccess.h`](srctree/include/linux/uaccess.h) + +use crate::{bindings, error::code::*, error::Result}; +use alloc::vec::Vec; +use core::ffi::{c_ulong, c_void}; +use core::mem::MaybeUninit; + +/// A pointer to an area in userspace memory, which can be either read-only or +/// read-write. +/// +/// All methods on this struct are safe: attempting to read or write on bad +/// addresses (either out of the bound of the slice or unmapped addresses) will +/// return `EFAULT`. Concurrent access, *including data races to/from userspace +/// memory*, is permitted, because fundamentally another userspace +/// thread/process could always be modifying memory at the same time (in the +/// same way that userspace Rust's [`std::io`] permits data races with the +/// contents of files on disk). In the presence of a race, the exact byte values +/// read/written are unspecified but the operation is well-defined. Kernelspace +/// code should validate its copy of data after completing a read, and not +/// expect that multiple reads of the same address will return the same value. +/// +/// These APIs are designed to make it difficult to accidentally write TOCTOU +/// (time-of-check to time-of-use) bugs. 
Every time a memory location is read, +/// the reader's position is advanced by the read length and the next read will +/// start from there. This helps prevent accidentally reading the same location +/// twice and causing a TOCTOU bug. +/// +/// Creating a [`UserSliceReader`] and/or [`UserSliceWriter`] consumes the +/// `UserSlice`, helping ensure that there aren't multiple readers or writers to +/// the same location. +/// +/// If double-fetching a memory location is necessary for some reason, then that +/// is done by creating multiple readers to the same memory location, e.g. using +/// [`clone_reader`]. +/// +/// # Examples +/// +/// Takes a region of userspace memory from the current process, and modify it +/// by adding one to every byte in the region. +/// +/// ```no_run +/// use alloc::vec::Vec; +/// use core::ffi::c_void; +/// use kernel::error::Result; +/// use kernel::uaccess::UserSlice; +/// +/// fn bytes_add_one(uptr: *mut c_void, len: usize) -> Result<()> { +/// let (read, mut write) = UserSlice::new(uptr, len).reader_writer(); +/// +/// let mut buf = Vec::new(); +/// read.read_all(&mut buf)?; +/// +/// for b in &mut buf { +/// *b = b.wrapping_add(1); +/// } +/// +/// write.write_slice(&buf)?; +/// Ok(()) +/// } +/// ``` +/// +/// Example illustrating a TOCTOU (time-of-check to time-of-use) bug. +/// +/// ```no_run +/// use alloc::vec::Vec; +/// use core::ffi::c_void; +/// use kernel::error::{code::EINVAL, Result}; +/// use kernel::uaccess::UserSlice; +/// +/// /// Returns whether the data in this region is valid. +/// fn is_valid(uptr: *mut c_void, len: usize) -> Result { +/// let read = UserSlice::new(uptr, len).reader(); +/// +/// let mut buf = Vec::new(); +/// read.read_all(&mut buf)?; +/// +/// todo!() +/// } +/// +/// /// Returns the bytes behind this user pointer if they are valid. +/// fn get_bytes_if_valid(uptr: *mut c_void, len: usize) -> Result> { +/// if !is_valid(uptr, len)? { +/// return Err(EINVAL); +/// } +/// +/// let read = UserSlice::new(uptr, len).reader(); +/// +/// let mut buf = Vec::new(); +/// read.read_all(&mut buf)?; +/// +/// // THIS IS A BUG! The bytes could have changed since we checked them. +/// // +/// // To avoid this kind of bug, don't call `UserSlice::new` multiple +/// // times with the same address. +/// Ok(buf) +/// } +/// ``` +/// +/// [`std::io`]: https://doc.rust-lang.org/std/io/index.html +/// [`clone_reader`]: UserSliceReader::clone_reader +pub struct UserSlice { + ptr: *mut c_void, + length: usize, +} + +impl UserSlice { + /// Constructs a user slice from a raw pointer and a length in bytes. + /// + /// Constructing a [`UserSlice`] performs no checks on the provided address + /// and length, it can safely be constructed inside a kernel thread with no + /// current userspace process. Reads and writes wrap the kernel APIs + /// `copy_from_user` and `copy_to_user`, which check the memory map of the + /// current process and enforce that the address range is within the user + /// range (no additional calls to `access_ok` are needed). + /// + /// Callers must be careful to avoid time-of-check-time-of-use + /// (TOCTOU) issues. The simplest way is to create a single instance of + /// [`UserSlice`] per user memory block as it reads each byte at + /// most once. + pub fn new(ptr: *mut c_void, length: usize) -> Self { + UserSlice { ptr, length } + } + + /// Reads the entirety of the user slice, appending it to the end of the + /// provided buffer. + /// + /// Fails with `EFAULT` if the read happens on a bad address. 
+ pub fn read_all(self, buf: &mut Vec) -> Result { + self.reader().read_all(buf) + } + + /// Constructs a [`UserSliceReader`]. + pub fn reader(self) -> UserSliceReader { + UserSliceReader { + ptr: self.ptr, + length: self.length, + } + } + + /// Constructs a [`UserSliceWriter`]. + pub fn writer(self) -> UserSliceWriter { + UserSliceWriter { + ptr: self.ptr, + length: self.length, + } + } + + /// Constructs both a [`UserSliceReader`] and a [`UserSliceWriter`]. + /// + /// Usually when this is used, you will first read the data, and then + /// overwrite it afterwards. + pub fn reader_writer(self) -> (UserSliceReader, UserSliceWriter) { + ( + UserSliceReader { + ptr: self.ptr, + length: self.length, + }, + UserSliceWriter { + ptr: self.ptr, + length: self.length, + }, + ) + } +} + +/// A reader for [`UserSlice`]. +/// +/// Used to incrementally read from the user slice. +pub struct UserSliceReader { + ptr: *mut c_void, + length: usize, +} + +impl UserSliceReader { + /// Skip the provided number of bytes. + /// + /// Returns an error if skipping more than the length of the buffer. + pub fn skip(&mut self, num_skip: usize) -> Result { + // Update `self.length` first since that's the fallible part of this + // operation. + self.length = self.length.checked_sub(num_skip).ok_or(EFAULT)?; + self.ptr = self.ptr.wrapping_byte_add(num_skip); + Ok(()) + } + + /// Create a reader that can access the same range of data. + /// + /// Reading from the clone does not advance the current reader. + /// + /// The caller should take care to not introduce TOCTOU issues, as described + /// in the documentation for [`UserSlice`]. + pub fn clone_reader(&self) -> UserSliceReader { + UserSliceReader { + ptr: self.ptr, + length: self.length, + } + } + + /// Returns the number of bytes left to be read from this reader. + /// + /// Note that even reading less than this number of bytes may fail. + pub fn len(&self) -> usize { + self.length + } + + /// Returns `true` if no data is available in the io buffer. + pub fn is_empty(&self) -> bool { + self.length == 0 + } + + /// Reads raw data from the user slice into a kernel buffer. + /// + /// Fails with `EFAULT` if the read happens on a bad address. + pub fn read_raw(&mut self, out: &mut [MaybeUninit]) -> Result { + let len = out.len(); + let out_ptr = out.as_mut_ptr().cast::(); + if len > self.length { + return Err(EFAULT); + } + let Ok(len_ulong) = c_ulong::try_from(len) else { + return Err(EFAULT); + }; + // SAFETY: The caller promises that `out` is valid for writing `len` bytes. + let res = unsafe { bindings::copy_from_user(out_ptr, self.ptr, len_ulong) }; + if res != 0 { + return Err(EFAULT); + } + // Userspace pointers are not directly dereferencable by the kernel, so + // we cannot use `add`, which has C-style rules for defined behavior. + self.ptr = self.ptr.wrapping_byte_add(len); + self.length -= len; + Ok(()) + } + + /// Reads raw data from the user slice into a kernel buffer. + /// + /// Fails with `EFAULT` if the read happens on a bad address. + pub fn read_slice(&mut self, out: &mut [u8]) -> Result { + // SAFETY: The types are compatible and `read_raw` doesn't write + // uninitialized bytes to `out`. + let out = unsafe { &mut *(out as *mut [u8] as *mut [MaybeUninit]) }; + self.read_raw(out) + } + + /// Reads the entirety of the user slice, appending it to the end of the + /// provided buffer. + /// + /// Fails with `EFAULT` if the read happens on a bad address. 
+    pub fn read_all(mut self, buf: &mut Vec<u8>) -> Result {
+        let len = self.length;
+        buf.try_reserve(len)?;
+
+        // The call to `try_reserve` was successful, so the spare capacity is at
+        // least `len` bytes long.
+        self.read_raw(&mut buf.spare_capacity_mut()[..len])?;
+
+        // SAFETY: Since the call to `read_raw` was successful, so the next
+        // `len` bytes of the vector have been initialized.
+        unsafe { buf.set_len(buf.len() + len) };
+        Ok(())
+    }
+}
+
+/// A writer for [`UserSlice`].
+///
+/// Used to incrementally write into the user slice.
+pub struct UserSliceWriter {
+    ptr: *mut c_void,
+    length: usize,
+}
+
+impl UserSliceWriter {
+    /// Returns the amount of space remaining in this buffer.
+    ///
+    /// Note that even writing less than this number of bytes may fail.
+    pub fn len(&self) -> usize {
+        self.length
+    }
+
+    /// Returns `true` if no more data can be written to this buffer.
+    pub fn is_empty(&self) -> bool {
+        self.length == 0
+    }
+
+    /// Writes raw data to this user pointer from a kernel buffer.
+    ///
+    /// Fails with `EFAULT` if the write happens on a bad address.
+    pub fn write_slice(&mut self, data: &[u8]) -> Result {
+        let len = data.len();
+        let data_ptr = data.as_ptr().cast::<c_void>();
+        if len > self.length {
+            return Err(EFAULT);
+        }
+        let Ok(len_ulong) = c_ulong::try_from(len) else {
+            return Err(EFAULT);
+        };
+        let res = unsafe { bindings::copy_to_user(self.ptr, data_ptr, len_ulong) };
+        if res != 0 {
+            return Err(EFAULT);
+        }
+        // Userspace pointers are not directly dereferencable by the kernel, so
+        // we cannot use `add`, which has C-style rules for defined behavior.
+        self.ptr = self.ptr.wrapping_byte_add(len);
+        self.length -= len;
+        Ok(())
+    }
+}
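As an illustration of the forward-only pattern described in the commit message, here is a brief usage sketch that uses only the methods added in this patch (it is not code from the patch itself): a fixed-size header is read first and the payload afterwards, through a single reader, so no userspace byte is fetched twice.

```rust
use alloc::vec::Vec;
use core::ffi::c_void;
use kernel::error::Result;
use kernel::uaccess::UserSlice;

/// Sketch: copy in an 8-byte header followed by the remaining payload.
/// Each read advances the reader, so the header bytes cannot be fetched
/// a second time through this reader.
fn read_request(uptr: *mut c_void, len: usize) -> Result<(Vec<u8>, [u8; 8])> {
    let mut reader = UserSlice::new(uptr, len).reader();

    // Consumes the first 8 bytes of the user slice.
    let mut header = [0u8; 8];
    reader.read_slice(&mut header)?;

    // `read_all` takes the reader by value and appends everything left.
    let mut payload = Vec::new();
    reader.read_all(&mut payload)?;

    Ok((payload, header))
}
```

If the same bytes really must be fetched twice, the caller has to say so explicitly by calling `clone_reader` (or constructing a second `UserSlice`), which keeps accidental double-fetches out of the easy path.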
From patchwork Thu Apr 4 12:31:40 2024
X-Patchwork-Submitter: Alice Ryhl
X-Patchwork-Id: 13617771
From: Alice Ryhl
Date: Thu, 04 Apr 2024 12:31:40 +0000
Subject: [PATCH v4 2/4] uaccess: always export _copy_[from|to]_user with
 CONFIG_RUST
Message-ID: <20240404-alice-mm-v4-2-49a84242cf02@google.com>
In-Reply-To: <20240404-alice-mm-v4-0-49a84242cf02@google.com>
References: <20240404-alice-mm-v4-0-49a84242cf02@google.com>
To: Miguel Ojeda, Matthew Wilcox, Al Viro, Andrew Morton, Kees Cook
Cc: Alex Gaynor, Wedson Almeida Filho, Boqun Feng, Gary Guo,
    Björn Roy Baron, Benno Lossin, Andreas Hindborg, Greg Kroah-Hartman,
    Arve Hjønnevåg, Todd Kjos, Martijn Coenen, Joel Fernandes,
    Carlos Llamas, Suren Baghdasaryan, Arnd Bergmann, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org,
    Alice Ryhl, Christian Brauner
From: Arnd Bergmann

Rust code needs to be able to access _copy_from_user and _copy_to_user
so that it can skip the check_copy_size check in cases where the length
is known at compile-time, mirroring the logic for when C code will skip
check_copy_size. To do this, we ensure that exported versions of these
methods are available when CONFIG_RUST is enabled.

Alice has verified that this patch passes the CONFIG_TEST_USER_COPY test
on x86 using the Android cuttlefish emulator.

Signed-off-by: Arnd Bergmann
Tested-by: Alice Ryhl
Reviewed-by: Boqun Feng
Signed-off-by: Alice Ryhl
---
 include/linux/uaccess.h | 38 ++++++++++++++++++++++++--------------
 lib/usercopy.c          | 30 ++++--------------------------
 2 files changed, 28 insertions(+), 40 deletions(-)

diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index 3064314f4832..2ebfce98b5cc 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -5,6 +5,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 
@@ -138,13 +139,18 @@ __copy_to_user(void __user *to, const void *from, unsigned long n)
 	return raw_copy_to_user(to, from, n);
 }
 
-#ifdef INLINE_COPY_FROM_USER
 static inline __must_check unsigned long
-_copy_from_user(void *to, const void __user *from, unsigned long n)
+_inline_copy_from_user(void *to, const void __user *from, unsigned long n)
 {
 	unsigned long res = n;
 	might_fault();
 	if (!should_fail_usercopy() && likely(access_ok(from, n))) {
+		/*
+		 * Ensure that bad access_ok() speculation will not
+		 * lead to nasty side effects *after* the copy is
+		 * finished:
+		 */
+		barrier_nospec();
 		instrument_copy_from_user_before(to, from, n);
 		res = raw_copy_from_user(to, from, n);
 		instrument_copy_from_user_after(to, from, n, res);
@@ -153,14 +159,11 @@ _copy_from_user(void *to, const void __user *from, unsigned long n)
 		memset(to + (n - res), 0, res);
 	return res;
 }
-#else
 extern __must_check unsigned long
 _copy_from_user(void *, const void __user *, unsigned long);
-#endif
 
-#ifdef INLINE_COPY_TO_USER
 static inline __must_check unsigned long
-_copy_to_user(void __user *to, const void *from, unsigned long n)
+_inline_copy_to_user(void __user *to, const void *from, unsigned long n)
 {
 	might_fault();
 	if (should_fail_usercopy())
@@ -171,25 +174,32 @@ _copy_to_user(void __user *to, const void *from, unsigned long n)
 	}
 	return n;
 }
-#else
 extern __must_check unsigned long
 _copy_to_user(void __user *, const void *, unsigned long);
-#endif
 
 static __always_inline unsigned long __must_check
 copy_from_user(void *to, const void __user *from, unsigned long n)
 {
-	if (check_copy_size(to, n, false))
-		n = _copy_from_user(to, from, n);
-	return n;
+	if (!check_copy_size(to, n, false))
+		return n;
+#ifdef INLINE_COPY_FROM_USER
+	return _inline_copy_from_user(to, from, n);
+#else
+	return _copy_from_user(to, from, n);
+#endif
 }
 
 static __always_inline unsigned long __must_check
 copy_to_user(void __user *to, const void *from, unsigned long n)
 {
-	if (check_copy_size(from, n, true))
-		n = _copy_to_user(to, from, n);
-	return n;
+	if (!check_copy_size(from, n, true))
+		return n;
+
+#ifdef INLINE_COPY_TO_USER
+	return _inline_copy_to_user(to, from, n);
+#else
+	return _copy_to_user(to, from, n);
+#endif
 }
 
 #ifndef copy_mc_to_kernel
diff --git a/lib/usercopy.c b/lib/usercopy.c
index d29fe29c6849..de7f30618293 100644
--- a/lib/usercopy.c
+++ b/lib/usercopy.c
@@ -7,40 +7,18 @@
 
 /* out-of-line parts */
 
-#ifndef INLINE_COPY_FROM_USER
+#if !defined(INLINE_COPY_FROM_USER) || defined(CONFIG_RUST)
 unsigned long _copy_from_user(void *to, const void __user *from, unsigned long n)
 {
-	unsigned long res = n;
-	might_fault();
-	if (!should_fail_usercopy() && likely(access_ok(from, n))) {
-		/*
-		 * Ensure that bad access_ok() speculation will not
-		 * lead to nasty side effects *after* the copy is
-		 * finished:
-		 */
-		barrier_nospec();
-		instrument_copy_from_user_before(to, from, n);
-		res = raw_copy_from_user(to, from, n);
-		instrument_copy_from_user_after(to, from, n, res);
-	}
-	if (unlikely(res))
-		memset(to + (n - res), 0, res);
-	return res;
+	return _inline_copy_from_user(to, from, n);
 }
 EXPORT_SYMBOL(_copy_from_user);
 #endif
 
-#ifndef INLINE_COPY_TO_USER
+#if !defined(INLINE_COPY_TO_USER) || defined(CONFIG_RUST)
 unsigned long _copy_to_user(void __user *to, const void *from, unsigned long n)
 {
-	might_fault();
-	if (should_fail_usercopy())
-		return n;
-	if (likely(access_ok(to, n))) {
-		instrument_copy_to_user(to, from, n);
-		n = raw_copy_to_user(to, from, n);
-	}
-	return n;
+	return _inline_copy_to_user(to, from, n);
}
 EXPORT_SYMBOL(_copy_to_user);
 #endif
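To show the kind of Rust call site this export is for, here is a sketch that anticipates the typed accessors added in the next patch (`bindings::_copy_from_user` refers to the generated binding for the symbol exported here, as used inside `rust/kernel/uaccess.rs`; this is an illustration, not code from the patch): when the copy length is the compile-time constant `size_of::<u64>()`, the caller can go straight to the out-of-line `_copy_from_user` and skip the `check_copy_size()` kernel-pointer check, just as a C caller with a constant size does.

```rust
use core::ffi::{c_ulong, c_void};
use core::mem::{size_of, MaybeUninit};

/// Sketch: read one `u64` from userspace with a compile-time-constant length.
///
/// # Safety
///
/// `uptr` must be a userspace address. `_copy_from_user` itself rejects
/// addresses outside the user range by returning a non-zero byte count.
unsafe fn read_u64_from_user(uptr: *const c_void) -> Result<u64, ()> {
    let mut out = MaybeUninit::<u64>::uninit();
    let len = size_of::<u64>() as c_ulong;
    // `bindings` is the kernel's generated C-bindings crate, available to
    // code inside the `kernel` crate.
    // SAFETY: `out` is valid for writing `len` bytes.
    let res = unsafe { bindings::_copy_from_user(out.as_mut_ptr().cast::<c_void>(), uptr, len) };
    if res != 0 {
        return Err(());
    }
    // SAFETY: the copy initialized all `len` bytes, and any bit pattern is a
    // valid `u64`.
    Ok(unsafe { out.assume_init() })
}
```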
From patchwork Thu Apr 4 12:31:41 2024
X-Patchwork-Submitter: Alice Ryhl
X-Patchwork-Id: 13617774
From: Alice Ryhl
Date: Thu, 04 Apr 2024 12:31:41 +0000
Subject: [PATCH v4 3/4] rust: uaccess: add typed accessors for userspace
 pointers
Message-ID: <20240404-alice-mm-v4-3-49a84242cf02@google.com>
In-Reply-To: <20240404-alice-mm-v4-0-49a84242cf02@google.com>
References: <20240404-alice-mm-v4-0-49a84242cf02@google.com>
To: Miguel Ojeda, Matthew Wilcox, Al Viro, Andrew Morton, Kees Cook
Cc: Alex Gaynor, Wedson Almeida Filho, Boqun Feng, Gary Guo,
    Björn Roy Baron, Benno Lossin, Andreas Hindborg, Greg Kroah-Hartman,
    Arve Hjønnevåg, Todd Kjos, Martijn Coenen, Joel Fernandes,
    Carlos Llamas, Suren Baghdasaryan, Arnd Bergmann, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org,
    Alice Ryhl, Christian Brauner
Add safe methods for reading and writing Rust values to and from
userspace pointers.

The C methods for copying to/from userspace use a function called
`check_object_size` to verify that the kernel pointer is not dangling.
However, this check is skipped when the length is a compile-time
constant, with the assumption that such cases trivially have a correct
kernel pointer.

In this patch, we apply the same optimization to the typed accessors.
For both methods, the size of the operation is known at compile time to
be `size_of` of the type being read or written. Since the C side doesn't
provide a variant that skips only this check, we create custom helpers
for this purpose.

The majority of reads and writes to userspace pointers in the Rust
Binder driver use these accessor methods. Benchmarking has found that
skipping the `check_object_size` check makes a big difference for the
cases being skipped here. (And that the check doesn't make a difference
for the cases that use the raw read/write methods.)

This code is based on something that was originally written by Wedson on
the old rust branch. It was modified by Alice to skip the
`check_object_size` check, and to update various comments, including the
notes about kernel pointers in `WritableToBytes`.

Co-developed-by: Wedson Almeida Filho
Signed-off-by: Wedson Almeida Filho
Reviewed-by: Benno Lossin
Reviewed-by: Boqun Feng
Signed-off-by: Alice Ryhl
---
 rust/kernel/types.rs   | 67 ++++++++++++++++++++++++++++++++++++++++
 rust/kernel/uaccess.rs | 76 ++++++++++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 141 insertions(+), 2 deletions(-)

diff --git a/rust/kernel/types.rs b/rust/kernel/types.rs
index aa77bad9bce4..f72b82efdbfa 100644
--- a/rust/kernel/types.rs
+++ b/rust/kernel/types.rs
@@ -409,3 +409,70 @@ pub enum Either {
     /// Constructs an instance of [`Either`] containing a value of type `R`.
     Right(R),
 }
+
+/// Types for which any bit pattern is valid.
+///
+/// Not all types are valid for all values. For example, a `bool` must be either
+/// zero or one, so reading arbitrary bytes into something that contains a
+/// `bool` is not okay.
+///
+/// It's okay for the type to have padding, as initializing those bytes has no
+/// effect.
+///
+/// # Safety
+///
+/// All bit-patterns must be valid for this type.
+pub unsafe trait FromBytes {}
+
+// SAFETY: All bit patterns are acceptable values of the types below.
+unsafe impl FromBytes for u8 {} +unsafe impl FromBytes for u16 {} +unsafe impl FromBytes for u32 {} +unsafe impl FromBytes for u64 {} +unsafe impl FromBytes for usize {} +unsafe impl FromBytes for i8 {} +unsafe impl FromBytes for i16 {} +unsafe impl FromBytes for i32 {} +unsafe impl FromBytes for i64 {} +unsafe impl FromBytes for isize {} +// SAFETY: If all bit patterns are acceptable for individual values in an array, +// then all bit patterns are also acceptable for arrays of that type. +unsafe impl FromBytes for [T] {} +unsafe impl FromBytes for [T; N] {} + +/// Types that can be viewed as an immutable slice of initialized bytes. +/// +/// If a struct implements this trait, then it is okay to copy it byte-for-byte +/// to userspace. This means that it should not have any padding, as padding +/// bytes are uninitialized. Reading uninitialized memory is not just undefined +/// behavior, it may even lead to leaking sensitive information on the stack to +/// userspace. +/// +/// The struct should also not hold kernel pointers, as kernel pointer addresses +/// are also considered sensitive. However, leaking kernel pointers is not +/// considered undefined behavior by Rust, so this is a correctness requirement, +/// but not a safety requirement. +/// +/// # Safety +/// +/// Values of this type may not contain any uninitialized bytes. +pub unsafe trait AsBytes {} + +// SAFETY: Instances of the following types have no uninitialized portions. +unsafe impl AsBytes for u8 {} +unsafe impl AsBytes for u16 {} +unsafe impl AsBytes for u32 {} +unsafe impl AsBytes for u64 {} +unsafe impl AsBytes for usize {} +unsafe impl AsBytes for i8 {} +unsafe impl AsBytes for i16 {} +unsafe impl AsBytes for i32 {} +unsafe impl AsBytes for i64 {} +unsafe impl AsBytes for isize {} +unsafe impl AsBytes for bool {} +unsafe impl AsBytes for char {} +unsafe impl AsBytes for str {} +// SAFETY: If individual values in an array have no uninitialized portions, then +// the array itself does not have any uninitialized portions either. +unsafe impl AsBytes for [T] {} +unsafe impl AsBytes for [T; N] {} diff --git a/rust/kernel/uaccess.rs b/rust/kernel/uaccess.rs index 3f8ad4dc13c4..6566b8cc2541 100644 --- a/rust/kernel/uaccess.rs +++ b/rust/kernel/uaccess.rs @@ -4,10 +4,15 @@ //! //! C header: [`include/linux/uaccess.h`](srctree/include/linux/uaccess.h) -use crate::{bindings, error::code::*, error::Result}; +use crate::{ + bindings, + error::code::*, + error::Result, + types::{AsBytes, FromBytes}, +}; use alloc::vec::Vec; use core::ffi::{c_ulong, c_void}; -use core::mem::MaybeUninit; +use core::mem::{size_of, MaybeUninit}; /// A pointer to an area in userspace memory, which can be either read-only or /// read-write. @@ -246,6 +251,41 @@ pub fn read_slice(&mut self, out: &mut [u8]) -> Result { self.read_raw(out) } + /// Reads a value of the specified type. + /// + /// Fails with `EFAULT` if the read encounters a page fault. + pub fn read(&mut self) -> Result { + let len = size_of::(); + if len > self.length { + return Err(EFAULT); + } + let Ok(len_ulong) = c_ulong::try_from(len) else { + return Err(EFAULT); + }; + let mut out: MaybeUninit = MaybeUninit::uninit(); + // SAFETY: The local variable `out` is valid for writing `size_of::()` bytes. + // + // By using the _copy_from_user variant, we skip the check_object_size + // check that verifies the kernel pointer. This mirrors the logic on the + // C side that skips the check when the length is a compile-time + // constant. 
+        let res = unsafe {
+            bindings::_copy_from_user(out.as_mut_ptr().cast::<c_void>(), self.ptr, len_ulong)
+        };
+        if res != 0 {
+            return Err(EFAULT);
+        }
+        // Since this is not a pointer to a valid object in our program,
+        // we cannot use `add`, which has C-style rules for defined
+        // behavior.
+        self.ptr = self.ptr.wrapping_byte_add(len);
+        self.length -= len;
+        // SAFETY: The read above has initialized all bytes in `out`, and since
+        // `T` implements `FromBytes`, any bit-pattern is a valid value for this
+        // type.
+        Ok(unsafe { out.assume_init() })
+    }
+
     /// Reads the entirety of the user slice, appending it to the end of the
     /// provided buffer.
     ///
@@ -308,4 +348,36 @@ pub fn write_slice(&mut self, data: &[u8]) -> Result {
         self.length -= len;
         Ok(())
     }
+
+    /// Writes the provided Rust value to this userspace pointer.
+    ///
+    /// Fails with `EFAULT` if the write encounters a page fault.
+    pub fn write<T: AsBytes>(&mut self, value: &T) -> Result {
+        let len = size_of::<T>();
+        if len > self.length {
+            return Err(EFAULT);
+        }
+        let Ok(len_ulong) = c_ulong::try_from(len) else {
+            return Err(EFAULT);
+        };
+        // SAFETY: The reference points to a value of type `T`, so it is valid
+        // for reading `size_of::<T>()` bytes.
+        //
+        // By using the _copy_to_user variant, we skip the check_object_size
+        // check that verifies the kernel pointer. This mirrors the logic on the
+        // C side that skips the check when the length is a compile-time
+        // constant.
+        let res = unsafe {
+            bindings::_copy_to_user(self.ptr, (value as *const T).cast::<c_void>(), len_ulong)
+        };
+        if res != 0 {
+            return Err(EFAULT);
+        }
+        // Since this is not a pointer to a valid object in our program,
+        // we cannot use `add`, which has C-style rules for defined
+        // behavior.
+        self.ptr = self.ptr.wrapping_byte_add(len);
+        self.length -= len;
+        Ok(())
+    }
 }
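As a usage sketch (not part of the patch), a driver-defined plain-data struct can opt into these typed accessors by asserting, via the two unsafe trait impls, the byte-level invariants that `FromBytes` and `AsBytes` document:

```rust
use kernel::error::Result;
use kernel::types::{AsBytes, FromBytes};
use kernel::uaccess::{UserSliceReader, UserSliceWriter};

/// A fixed-layout request with no padding and no kernel pointers.
#[repr(C)]
struct Request {
    cmd: u32,
    arg: u32,
}

// SAFETY: `Request` contains only integers, so every bit pattern is a valid
// value of the type.
unsafe impl FromBytes for Request {}
// SAFETY: two `u32` fields leave no padding, so there are no uninitialized
// bytes, and the struct holds no kernel pointers.
unsafe impl AsBytes for Request {}

/// Sketch: read one typed request and echo a typed reply.
fn handle(reader: &mut UserSliceReader, writer: &mut UserSliceWriter) -> Result {
    // One bounds-checked copy of `size_of::<Request>()` bytes from userspace.
    let req: Request = reader.read()?;

    // Write the reply back byte-for-byte.
    let reply = Request { cmd: req.cmd, arg: req.arg.wrapping_add(1) };
    writer.write(&reply)
}
```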
smtp.mailfrom=3zZ0OZgkKCHoYjgacpwfjemmejc.amkjglsv-kkitYai.mpe@flex--aliceryhl.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1712233934; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=upT25yzbahblDd5bFzHBxFzV9brjNUZc4+hrC3EotZA=; b=jHnGpWv8o6NrYSkI8jrbHs9c0FLt2FeRZUsdSViDebGJAY/ffC2sufIRsRlqs4PyEkDv7S qqek3MiKECFrh6wAf5FEMV3h2PoZg5mY2c9p8qf22qdAO+lmj24LgGOWuC8Qk1t6V8u7Fd duE7Q7ip6Nn6DyGTwOU3bVJgh1XyYeQ= ARC-Authentication-Results: i=1; imf12.hostedemail.com; dkim=pass header.d=google.com header.s=20230601 header.b=nefkCrHA; spf=pass (imf12.hostedemail.com: domain of 3zZ0OZgkKCHoYjgacpwfjemmejc.amkjglsv-kkitYai.mpe@flex--aliceryhl.bounces.google.com designates 209.85.128.201 as permitted sender) smtp.mailfrom=3zZ0OZgkKCHoYjgacpwfjemmejc.amkjglsv-kkitYai.mpe@flex--aliceryhl.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1712233934; a=rsa-sha256; cv=none; b=3g21qpC+5fhIJkdc+keUawYnozjzRyQ9hVN7gBmYoYFLzit9K9m7IpWEi6Ck8pewvu365C eg/cPdXpcfawYNi3OzdoEGc7JCj0WxYq6kr2K/K/wZlIx1h0ZJ7PYx2gI4AxrhFBslrBpL nAdu+dToA2qgC5wS5BNh2rvqWQESTsI= Received: by mail-yw1-f201.google.com with SMTP id 00721157ae682-615032a8ce5so16676067b3.3 for ; Thu, 04 Apr 2024 05:32:14 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1712233933; x=1712838733; darn=kvack.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=upT25yzbahblDd5bFzHBxFzV9brjNUZc4+hrC3EotZA=; b=nefkCrHA7IPkdN5V5gFevA8Y9nHNxVxi0VJ7g7c4BR/FK9Vry+6LUJsAGnbP3IaKCn /VZ2cG3XYVdqk9iBGInUvEJh/Gy+/M4KEmHPyMS72cECaotRyeZnxA49G41ZefXosrJc 0T3yv7fgANJjg5eFiPEsSIhONu+6ogNCctUlaiYMu5J2eo8ilonzm1ctsIZpzx7KyLqX OxSV3/FA6i8th3waWec3K67khU+Iip4SjwGqEpr9m0ZAMTc5YKqa0eg7r3A3Xh1lPf2Z lTXVITCF2ssqAU0Ki31je35kqlk0aJgvHCfC1IPNHtkaQnyKlVBHMVohpza0xl2GJTZZ GRHw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1712233933; x=1712838733; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=upT25yzbahblDd5bFzHBxFzV9brjNUZc4+hrC3EotZA=; b=siHeiEVQTp3hLgDbGwlKYXy8oGKLLRGRUsDVN8SozDGij1yqwMuFIRAaId7fcUHwpe 6NWLE/Z+vo2zubhoGc80bOO7R9q/FTgKRLoHniOx+nFqVBUDQ3EuTncb2xX2jF1dV5Ui Ge/Mu0cHduoYOJfoCxLuds3ktdlI0uD/yYnNr8v1a0rr2R32gZcTXIyIoOAkgTjBpr6U mZ6B6rrtf4CofdLzHED3qmjDR28NR/gJeNJtwb7iamt2tDD4nOhVreKcx3XbuK4rA8o4 fhhXL6gIkXqYgDflxttzJ+yNULNh7WjN0ejdq545QRPMU4+GLRGhBkgcNDbO/2TuwGUC nY2A== X-Forwarded-Encrypted: i=1; AJvYcCVHDUcPalq1oTj4hm13fqd+R/GMtZqrg4OjEugZrD8tu8ND6ajZKsgca7A/GT2ip25LKLxyxyN+2u6oV3/Fa1WRs4I= X-Gm-Message-State: AOJu0YyhsPjsurvfGQIYOTiz8QQG4ENowOXdeAwJT16txbywSKVDlhJo PHUxyidnbVsti7qfLX6D8BrrORRDyB2uBCqouCIAZjoYMN9AO1eNhNQUqMv7Os0SiLWlVeIfsbi ksadR6Z49knBlpg== X-Google-Smtp-Source: AGHT+IGmCdSsDsX60IrhBcrsWkYfjETKHqOUwZqc/QWEbCWW/bS5bF3ZT3NOSDpSvaF+vXmL7+qspwRE95eAVCA= X-Received: from aliceryhl2.c.googlers.com ([fda3:e722:ac3:cc00:68:949d:c0a8:572]) (user=aliceryhl job=sendgmr) by 2002:a05:690c:6812:b0:615:1471:19bb with SMTP id id18-20020a05690c681200b00615147119bbmr665073ywb.2.1712233933464; Thu, 04 Apr 2024 05:32:13 -0700 (PDT) Date: Thu, 04 Apr 
From patchwork Thu Apr 4 12:31:42 2024
X-Patchwork-Submitter: Alice Ryhl
X-Patchwork-Id: 13617775
Date: Thu, 04 Apr 2024 12:31:42 +0000
In-Reply-To: <20240404-alice-mm-v4-0-49a84242cf02@google.com>
References: <20240404-alice-mm-v4-0-49a84242cf02@google.com>
Message-ID: <20240404-alice-mm-v4-4-49a84242cf02@google.com>
X-Mailer: b4 0.13-dev-26615
Subject: [PATCH v4 4/4] rust: add abstraction for `struct page`
From: Alice Ryhl
To: Miguel Ojeda, Matthew Wilcox, Al Viro, Andrew Morton, Kees Cook
Cc: Alex Gaynor, Wedson Almeida Filho, Boqun Feng, Gary Guo,
 Björn Roy Baron, Benno Lossin, Andreas Hindborg, Greg Kroah-Hartman,
 Arve Hjønnevåg, Todd Kjos, Martijn Coenen, Joel Fernandes, Carlos Llamas,
 Suren Baghdasaryan, Arnd Bergmann, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org, Alice Ryhl,
 Christian Brauner
Adds a new struct called `Page` that wraps a pointer to `struct page`. This
struct is assumed to hold ownership over the page, so that Rust code can
allocate and manage pages directly.

The page type has various methods for reading and writing into the page. These
methods will temporarily map the page to allow the operation. All of these
methods use a helper that takes an offset and length, performs bounds checks,
and returns a pointer to the given offset in the page.

This patch only adds support for pages of order zero, as that is all Rust
Binder needs. However, it is written to make it easy to add support for
higher-order pages in the future. To do that, you would add a const generic
parameter to `Page` that specifies the order. Most of the methods do not need
to be adjusted, as the logic for dealing with mapping multiple pages at once
can be isolated to just the `with_pointer_into_page` method. Finally, the
struct can be renamed to `Pages`, and the type alias `Page = Pages<0>` can be
introduced.

Rust Binder needs to manage pages directly as that is how transactions are
delivered: Each process has an mmap'd region for incoming transactions. When
an incoming transaction arrives, the Binder driver will choose a region in the
mmap, allocate and map the relevant pages manually, and copy the incoming
transaction directly into the page. This architecture allows the driver to
copy transactions directly from the address space of one process to another,
without an intermediate copy to a kernel buffer.

This code is based on Wedson's page abstractions from the old rust branch, but
it has been modified by Alice by removing the incomplete support for
higher-order pages, by introducing the `with_*` helpers to consolidate the
bounds checking logic into a single place, and by introducing gfp flags.

Co-developed-by: Wedson Almeida Filho
Signed-off-by: Wedson Almeida Filho
Signed-off-by: Alice Ryhl
---
 rust/bindings/bindings_helper.h |   2 +
 rust/helpers.c                  |  20 ++++
 rust/kernel/lib.rs              |   1 +
 rust/kernel/page.rs             | 259 ++++++++++++++++++++++++++++++++++++++++
 4 files changed, 282 insertions(+)
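[Editorial note] To make the higher-order extension path described in the
changelog concrete, the following is a rough, non-authoritative sketch of what
the const-generic version could look like. It is not part of this patch; the
names, the `PAGE_SIZE << ORDER` arithmetic, and the method shown are
assumptions about the future refactoring the changelog outlines.

// Sketch only: possible future shape, not part of this patch.
pub struct Pages<const ORDER: u32> {
    page: NonNull<bindings::page>,
}

/// Order-zero allocations keep the current name via a type alias.
pub type Page = Pages<0>;

impl<const ORDER: u32> Pages<ORDER> {
    /// Number of bytes covered by this allocation.
    pub const fn size() -> usize {
        PAGE_SIZE << ORDER
    }

    pub fn alloc_pages(gfp_flags: flags::gfp_t) -> Result<Self, AllocError> {
        // Same justification as today's `alloc_page`, with the order now
        // coming from the const generic parameter. `with_pointer_into_page`
        // would be the one place that learns to map multiple pages.
        let page = unsafe { bindings::alloc_pages(gfp_flags, ORDER) };
        let page = NonNull::new(page).ok_or(AllocError)?;
        Ok(Self { page })
    }
}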
diff --git a/rust/bindings/bindings_helper.h b/rust/bindings/bindings_helper.h
index 65b98831b975..da1e97871419 100644
--- a/rust/bindings/bindings_helper.h
+++ b/rust/bindings/bindings_helper.h
@@ -20,5 +20,7 @@
 /* `bindgen` gets confused at certain things. */
 const size_t RUST_CONST_HELPER_ARCH_SLAB_MINALIGN = ARCH_SLAB_MINALIGN;
+const size_t RUST_CONST_HELPER_PAGE_SIZE = PAGE_SIZE;
 const gfp_t RUST_CONST_HELPER_GFP_KERNEL = GFP_KERNEL;
 const gfp_t RUST_CONST_HELPER___GFP_ZERO = __GFP_ZERO;
+const gfp_t RUST_CONST_HELPER___GFP_HIGHMEM = ___GFP_HIGHMEM;
diff --git a/rust/helpers.c b/rust/helpers.c
index 312b6fcb49d5..72361003ba91 100644
--- a/rust/helpers.c
+++ b/rust/helpers.c
@@ -25,6 +25,8 @@
 #include
 #include
 #include
+#include <linux/gfp.h>
+#include <linux/highmem.h>
 #include
 #include
 #include
@@ -93,6 +95,24 @@ int rust_helper_signal_pending(struct task_struct *t)
 }
 EXPORT_SYMBOL_GPL(rust_helper_signal_pending);
 
+struct page *rust_helper_alloc_pages(gfp_t gfp_mask, unsigned int order)
+{
+	return alloc_pages(gfp_mask, order);
+}
+EXPORT_SYMBOL_GPL(rust_helper_alloc_pages);
+
+void *rust_helper_kmap_local_page(struct page *page)
+{
+	return kmap_local_page(page);
+}
+EXPORT_SYMBOL_GPL(rust_helper_kmap_local_page);
+
+void rust_helper_kunmap_local(const void *addr)
+{
+	kunmap_local(addr);
+}
+EXPORT_SYMBOL_GPL(rust_helper_kunmap_local);
+
 refcount_t rust_helper_REFCOUNT_INIT(int n)
 {
 	return (refcount_t)REFCOUNT_INIT(n);
diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
index 37f84223b83f..667fc67fa24f 100644
--- a/rust/kernel/lib.rs
+++ b/rust/kernel/lib.rs
@@ -39,6 +39,7 @@
 pub mod kunit;
 #[cfg(CONFIG_NET)]
 pub mod net;
+pub mod page;
 pub mod prelude;
 pub mod print;
 mod static_assert;
diff --git a/rust/kernel/page.rs b/rust/kernel/page.rs
new file mode 100644
index 000000000000..5aba0261242d
--- /dev/null
+++ b/rust/kernel/page.rs
@@ -0,0 +1,259 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Kernel page allocation and management.
+
+use crate::{bindings, error::code::*, error::Result, uaccess::UserSliceReader};
+use core::{
+    alloc::AllocError,
+    ptr::{self, NonNull},
+};
+
+/// A bitwise shift for the page size.
+#[allow(clippy::unnecessary_cast)]
+pub const PAGE_SHIFT: usize = bindings::PAGE_SHIFT as usize;
+
+/// The number of bytes in a page.
+#[allow(clippy::unnecessary_cast)]
+pub const PAGE_SIZE: usize = bindings::PAGE_SIZE as usize;
+
+/// A bitmask that gives the page containing a given address.
+pub const PAGE_MASK: usize = !(PAGE_SIZE - 1);
+
+/// Flags for the "get free page" function that underlies all memory allocations.
+pub mod flags {
+    /// gfp flags.
+    #[allow(non_camel_case_types)]
+    pub type gfp_t = bindings::gfp_t;
+
+    /// `GFP_KERNEL` is typical for kernel-internal allocations. The caller requires `ZONE_NORMAL`
+    /// or a lower zone for direct access but can direct reclaim.
+    pub const GFP_KERNEL: gfp_t = bindings::GFP_KERNEL;
+    /// `__GFP_ZERO` returns a zeroed page on success.
+    pub const __GFP_ZERO: gfp_t = bindings::__GFP_ZERO;
+    /// `__GFP_HIGHMEM` indicates that the allocated memory may be located in high memory.
+    pub const __GFP_HIGHMEM: gfp_t = bindings::__GFP_HIGHMEM;
+}
+
+/// A pointer to a page that owns the page allocation.
+///
+/// # Invariants
+///
+/// The pointer is valid, and has ownership over the page.
+pub struct Page {
+    page: NonNull<bindings::page>,
+}
+
+// SAFETY: Pages have no logic that relies on them staying on a given thread, so
+// moving them across threads is safe.
+unsafe impl Send for Page {}
+
+// SAFETY: Pages have no logic that relies on them not being accessed
+// concurrently, so accessing them concurrently is safe.
+unsafe impl Sync for Page {}
+
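// [Editorial sketch, not part of the patch] The three constants above are the
// usual page-arithmetic helpers. As an illustration of how callers typically
// combine them, the made-up helper below splits an address into its
// page-aligned base and its in-page offset.
fn split_addr(addr: usize) -> (usize, usize) {
    let page_base = addr & PAGE_MASK; // page-aligned base address
    let in_page = addr & !PAGE_MASK; // byte offset within that page
    // Equivalently, `addr >> PAGE_SHIFT` gives the page number, and shifting
    // it back left by `PAGE_SHIFT` reproduces `page_base`.
    (page_base, in_page)
}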
+impl Page {
+    /// Allocates a new page.
+    pub fn alloc_page(gfp_flags: flags::gfp_t) -> Result<Self, AllocError> {
+        // SAFETY: Depending on the value of `gfp_flags`, this call may sleep.
+        // Other than that, it is always safe to call this method.
+        let page = unsafe { bindings::alloc_pages(gfp_flags, 0) };
+        let page = NonNull::new(page).ok_or(AllocError)?;
+        // INVARIANT: We just successfully allocated a page, so we now have
+        // ownership of the newly allocated page. We transfer that ownership to
+        // the new `Page` object.
+        Ok(Self { page })
+    }
+
+    /// Returns a raw pointer to the page.
+    pub fn as_ptr(&self) -> *mut bindings::page {
+        self.page.as_ptr()
+    }
+
+    /// Runs a piece of code with this page mapped to an address.
+    ///
+    /// The page is unmapped when this call returns.
+    ///
+    /// # Using the raw pointer
+    ///
+    /// It is up to the caller to use the provided raw pointer correctly. The
+    /// pointer is valid for `PAGE_SIZE` bytes and for the duration in which the
+    /// closure is called. The pointer might only be mapped on the current
+    /// thread, and when that is the case, dereferencing it on other threads is
+    /// UB. Other than that, the usual rules for dereferencing a raw pointer
+    /// apply: don't cause data races, the memory may be uninitialized, and so
+    /// on.
+    ///
+    /// If multiple threads map the same page at the same time, then they may
+    /// see different addresses. However, even if the addresses are different,
+    /// the underlying memory is still the same for these purposes (e.g., it's
+    /// still a data race if they both write to the same underlying byte at the
+    /// same time).
+    fn with_page_mapped<T>(&self, f: impl FnOnce(*mut u8) -> T) -> T {
+        // SAFETY: `page` is valid due to the type invariants on `Page`.
+        let mapped_addr = unsafe { bindings::kmap_local_page(self.as_ptr()) };
+
+        let res = f(mapped_addr.cast());
+
+        // This unmaps the page mapped above.
+        //
+        // SAFETY: Since this API takes the user code as a closure, it can only
+        // be used in a manner where the pages are unmapped in reverse order.
+        // This is as required by `kunmap_local`.
+        //
+        // In other words, if this call to `kunmap_local` happens when a
+        // different page should be unmapped first, then there must necessarily
+        // be a call to `kmap_local_page` other than the call just above in
+        // `with_page_mapped` that made that possible. In this case, it is the
+        // unsafe block that wraps that other call that is incorrect.
+        unsafe { bindings::kunmap_local(mapped_addr) };
+
+        res
+    }
+
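// [Editorial sketch, not part of the patch] Because the mapping only lives for
// the duration of the closure, nested uses of `with_page_mapped` naturally
// unmap in reverse order, which is the discipline `kunmap_local` requires.
// `copy_page` is a made-up, module-internal example that assumes no one else
// is writing to either page concurrently.
fn copy_page(dst: &Page, src: &Page) {
    dst.with_page_mapped(|dst_ptr| {
        src.with_page_mapped(|src_ptr| {
            // Both pointers are valid for `PAGE_SIZE` bytes while the closures
            // run; this sketch assumes no concurrent writers to either page.
            unsafe { ptr::copy_nonoverlapping(src_ptr, dst_ptr, PAGE_SIZE) };
        });
        // `src` is unmapped here, before `dst`: last mapped, first unmapped.
    });
}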
+    /// Runs a piece of code with a raw pointer to a slice of this page, with
+    /// bounds checking.
+    ///
+    /// If `f` is called, then it will be called with a pointer that points at
+    /// `off` bytes into the page, and the pointer will be valid for at least
+    /// `len` bytes. The pointer is only valid on this task, as this method uses
+    /// a local mapping.
+    ///
+    /// If `off` and `len` refer to a region outside of this page, then this
+    /// method returns `EINVAL` and does not call `f`.
+    ///
+    /// # Using the raw pointer
+    ///
+    /// It is up to the caller to use the provided raw pointer correctly. The
+    /// pointer is valid for `len` bytes and for the duration in which the
+    /// closure is called. The pointer might only be mapped on the current
+    /// thread, and when that is the case, dereferencing it on other threads is
+    /// UB. Other than that, the usual rules for dereferencing a raw pointer
+    /// apply: don't cause data races, the memory may be uninitialized, and so
+    /// on.
+    ///
+    /// If multiple threads map the same page at the same time, then they may
+    /// see different addresses. However, even if the addresses are different,
+    /// the underlying memory is still the same for these purposes (e.g., it's
+    /// still a data race if they both write to the same underlying byte at the
+    /// same time).
+    fn with_pointer_into_page<T>(
+        &self,
+        off: usize,
+        len: usize,
+        f: impl FnOnce(*mut u8) -> Result<T>,
+    ) -> Result<T> {
+        let bounds_ok = off <= PAGE_SIZE && len <= PAGE_SIZE && (off + len) <= PAGE_SIZE;
+
+        if bounds_ok {
+            self.with_page_mapped(move |page_addr| {
+                // SAFETY: The `off` integer is at most `PAGE_SIZE`, so this
+                // pointer offset will result in a pointer that is in bounds or
+                // one off the end of the page.
+                f(unsafe { page_addr.add(off) })
+            })
+        } else {
+            Err(EINVAL)
+        }
+    }
+
+    /// Maps the page and reads from it into the given buffer.
+    ///
+    /// This method will perform bounds checks on the page offset. If `offset ..
+    /// offset+len` goes outside of the page, then this call returns `EINVAL`.
+    ///
+    /// # Safety
+    ///
+    /// * Callers must ensure that `dst` is valid for writing `len` bytes.
+    /// * Callers must ensure that this call does not race with a write to the
+    ///   same page that overlaps with this read.
+    pub unsafe fn read_raw(&self, dst: *mut u8, offset: usize, len: usize) -> Result {
+        self.with_pointer_into_page(offset, len, move |src| {
+            // SAFETY: If `with_pointer_into_page` calls into this closure, then
+            // it has performed a bounds check and guarantees that `src` is
+            // valid for `len` bytes.
+            //
+            // The caller guarantees that there is no data race.
+            unsafe { ptr::copy_nonoverlapping(src, dst, len) };
+            Ok(())
+        })
+    }
+
+    /// Maps the page and writes into it from the given buffer.
+    ///
+    /// This method will perform bounds checks on the page offset. If `offset ..
+    /// offset+len` goes outside of the page, then this call returns `EINVAL`.
+    ///
+    /// # Safety
+    ///
+    /// * Callers must ensure that `src` is valid for reading `len` bytes.
+    /// * Callers must ensure that this call does not race with a read or write
+    ///   to the same page that overlaps with this write.
+    pub unsafe fn write_raw(&self, src: *const u8, offset: usize, len: usize) -> Result {
+        self.with_pointer_into_page(offset, len, move |dst| {
+            // SAFETY: If `with_pointer_into_page` calls into this closure, then
+            // it has performed a bounds check and guarantees that `dst` is
+            // valid for `len` bytes.
+            //
+            // The caller guarantees that there is no data race.
+            unsafe { ptr::copy_nonoverlapping(src, dst, len) };
+            Ok(())
+        })
+    }
+
+    /// Maps the page and zeroes the given slice.
+    ///
+    /// This method will perform bounds checks on the page offset. If `offset ..
+    /// offset+len` goes outside of the page, then this call returns `EINVAL`.
+    ///
+    /// # Safety
+    ///
+    /// Callers must ensure that this call does not race with a read or write to
+    /// the same page that overlaps with this write.
+    pub unsafe fn fill_zero(&self, offset: usize, len: usize) -> Result {
+        self.with_pointer_into_page(offset, len, move |dst| {
+            // SAFETY: If `with_pointer_into_page` calls into this closure, then
+            // it has performed a bounds check and guarantees that `dst` is
+            // valid for `len` bytes.
+            //
+            // The caller guarantees that there is no data race.
+            unsafe { ptr::write_bytes(dst, 0u8, len) };
+            Ok(())
+        })
+    }
+
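// [Editorial sketch, not part of the patch] Moving a small region between two
// pages with the unsafe helpers above. The stack buffer and the fixed 64-byte
// cap are illustrative assumptions; the bounds checks on `offset`/`len` are
// done by `read_raw`/`write_raw` themselves.
unsafe fn copy_region(src: &Page, dst: &Page, offset: usize, len: usize) -> Result {
    let mut buf = [0u8; 64];
    if len > buf.len() {
        return Err(EINVAL);
    }
    // `buf` is valid for `len` bytes, and the caller of this sketch is assumed
    // to guarantee that nothing else touches either page concurrently.
    unsafe { src.read_raw(buf.as_mut_ptr(), offset, len)? };
    unsafe { dst.write_raw(buf.as_ptr(), offset, len)? };
    Ok(())
}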
+    /// Copies data from userspace into this page.
+    ///
+    /// This method will perform bounds checks on the page offset. If `offset ..
+    /// offset+len` goes outside of the page, then this call returns `EINVAL`.
+    ///
+    /// Like the other `UserSliceReader` methods, data races are allowed on the
+    /// userspace address. However, they are not allowed on the page you are
+    /// copying into.
+    ///
+    /// # Safety
+    ///
+    /// Callers must ensure that this call does not race with a read or write to
+    /// the same page that overlaps with this write.
+    pub unsafe fn copy_from_user_slice(
+        &self,
+        reader: &mut UserSliceReader,
+        offset: usize,
+        len: usize,
+    ) -> Result {
+        self.with_pointer_into_page(offset, len, move |dst| {
+            // SAFETY: If `with_pointer_into_page` calls into this closure, then
+            // it has performed a bounds check and guarantees that `dst` is
+            // valid for `len` bytes. Furthermore, we have exclusive access to
+            // the slice since the caller guarantees that there are no races.
+            reader.read_raw(unsafe { core::slice::from_raw_parts_mut(dst.cast(), len) })
+        })
+    }
+}
+
+impl Drop for Page {
+    fn drop(&mut self) {
+        // SAFETY: By the type invariants, we have ownership of the page and can
+        // free it.
+        unsafe { bindings::__free_pages(self.page.as_ptr(), 0) };
+    }
+}
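[Editorial note] To tie the new type back to the use case in the changelog,
here is a minimal sketch of the allocate-and-fill flow a Binder-like driver
could use. The function name, the flag choice, and the reliance on the kernel
crate's conversion from `AllocError` to the usual error type are assumptions
for illustration; they are not part of this patch.

fn page_from_user(reader: &mut UserSliceReader, len: usize) -> Result<Page> {
    if len > PAGE_SIZE {
        return Err(EINVAL);
    }
    // A zeroed, possibly-highmem page, as a Binder-style driver might request.
    let page = Page::alloc_page(flags::GFP_KERNEL | flags::__GFP_ZERO | flags::__GFP_HIGHMEM)?;
    // SAFETY: the page was just allocated and has not been shared with anyone,
    // so nothing can race with this write.
    unsafe { page.copy_from_user_slice(reader, 0, len)? };
    Ok(page)
}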