From patchwork Thu Apr 18 08:59:20 2024
X-Patchwork-Submitter: Alice Ryhl
X-Patchwork-Id: 13634366
Date: Thu, 18 Apr 2024 08:59:20 +0000
In-Reply-To: <20240418-alice-mm-v6-0-cb8f3e5d688f@google.com>
References: <20240418-alice-mm-v6-0-cb8f3e5d688f@google.com>
Message-ID: <20240418-alice-mm-v6-4-cb8f3e5d688f@google.com>
Subject: [PATCH v6 4/4] rust: add abstraction for `struct page`
From: Alice Ryhl
To: Miguel Ojeda, Matthew Wilcox, Al Viro, Andrew Morton, Kees Cook
Cc: Alex Gaynor, Wedson Almeida Filho, Boqun Feng, Gary Guo,
 Björn Roy Baron, Benno Lossin, Andreas Hindborg, Greg Kroah-Hartman,
 Arve Hjønnevåg, Todd Kjos, Martijn Coenen, Joel Fernandes, Carlos Llamas,
 Suren Baghdasaryan,
 Arnd Bergmann, Trevor Gross, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org,
 Alice Ryhl, Christian Brauner

Adds a new struct called `Page` that wraps a pointer to `struct page`.
This struct is assumed to hold ownership over the page, so that Rust
code can allocate and manage pages directly.

The page type has various methods for reading and writing into the
page. These methods will temporarily map the page to allow the
operation. All of these methods use a helper that takes an offset and
length, performs bounds checks, and returns a pointer to the given
offset in the page.

This patch only adds support for pages of order zero, as that is all
Rust Binder needs. However, it is written to make it easy to add
support for higher-order pages in the future. To do that, you would
add a const generic parameter to `Page` that specifies the order. Most
of the methods do not need to be adjusted, as the logic for dealing
with mapping multiple pages at once can be isolated to just the
`with_pointer_into_page` method.

Rust Binder needs to manage pages directly as that is how transactions
are delivered: Each process has an mmap'd region for incoming
transactions. When an incoming transaction arrives, the Binder driver
will choose a region in the mmap, allocate and map the relevant pages
manually, and copy the incoming transaction directly into the page.
This architecture allows the driver to copy transactions directly from
the address space of one process to another, without an intermediate
copy to a kernel buffer.
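As a rough illustration of that flow (a minimal sketch, not part of
this patch; the function name `copy_transaction` and its parameters are
hypothetical), a Binder-like driver could allocate a page and fill it
straight from a userspace reader using the API below:

    use kernel::page::Page;
    use kernel::prelude::*;
    use kernel::uaccess::UserSliceReader;

    /// Allocates one zero-order page and fills `len` bytes of it, starting
    /// at `offset`, from an incoming userspace reader.
    fn copy_transaction(
        reader: &mut UserSliceReader,
        offset: usize,
        len: usize,
    ) -> Result<Page> {
        let page = Page::alloc_page(GFP_KERNEL)?;
        // SAFETY: `page` was just allocated and has not been shared with
        // any other thread, so nothing can race with this write. The call
        // itself bounds-checks `offset` and `len` against the page.
        unsafe { page.copy_from_user_slice_raw(reader, offset, len)? };
        Ok(page)
    }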
This code is based on Wedson's page abstractions from the old rust
branch, but it has been modified by Alice by removing the incomplete
support for higher-order pages, by introducing the `with_*` helpers to
consolidate the bounds checking logic into a single place, and by
making various other changes.

Co-developed-by: Wedson Almeida Filho
Signed-off-by: Wedson Almeida Filho
Reviewed-by: Andreas Hindborg
Reviewed-by: Trevor Gross
Reviewed-by: Benno Lossin
Signed-off-by: Alice Ryhl
Reviewed-by: Boqun Feng
---
 rust/bindings/bindings_helper.h |   1 +
 rust/helpers.c                  |  20 ++++
 rust/kernel/alloc.rs            |   7 ++
 rust/kernel/lib.rs              |   1 +
 rust/kernel/page.rs             | 250 ++++++++++++++++++++++++++++++++++++++++
 5 files changed, 279 insertions(+)

diff --git a/rust/bindings/bindings_helper.h b/rust/bindings/bindings_helper.h
index ddb5644d4fd9..0862261cfbed 100644
--- a/rust/bindings/bindings_helper.h
+++ b/rust/bindings/bindings_helper.h
@@ -20,6 +20,7 @@
 
 /* `bindgen` gets confused at certain things. */
 const size_t RUST_CONST_HELPER_ARCH_SLAB_MINALIGN = ARCH_SLAB_MINALIGN;
+const size_t RUST_CONST_HELPER_PAGE_SIZE = PAGE_SIZE;
 const gfp_t RUST_CONST_HELPER_GFP_ATOMIC = GFP_ATOMIC;
 const gfp_t RUST_CONST_HELPER_GFP_KERNEL = GFP_KERNEL;
 const gfp_t RUST_CONST_HELPER_GFP_KERNEL_ACCOUNT = GFP_KERNEL_ACCOUNT;
diff --git a/rust/helpers.c b/rust/helpers.c
index 312b6fcb49d5..72361003ba91 100644
--- a/rust/helpers.c
+++ b/rust/helpers.c
@@ -25,6 +25,8 @@
 #include <linux/build_bug.h>
 #include <linux/err.h>
 #include <linux/errname.h>
+#include <linux/gfp.h>
+#include <linux/highmem.h>
 #include <linux/mutex.h>
 #include <linux/refcount.h>
 #include <linux/sched/signal.h>
@@ -93,6 +95,24 @@ int rust_helper_signal_pending(struct task_struct *t)
 }
 EXPORT_SYMBOL_GPL(rust_helper_signal_pending);
 
+struct page *rust_helper_alloc_pages(gfp_t gfp_mask, unsigned int order)
+{
+	return alloc_pages(gfp_mask, order);
+}
+EXPORT_SYMBOL_GPL(rust_helper_alloc_pages);
+
+void *rust_helper_kmap_local_page(struct page *page)
+{
+	return kmap_local_page(page);
+}
+EXPORT_SYMBOL_GPL(rust_helper_kmap_local_page);
+
+void rust_helper_kunmap_local(const void *addr)
+{
+	kunmap_local(addr);
+}
+EXPORT_SYMBOL_GPL(rust_helper_kunmap_local);
+
 refcount_t rust_helper_REFCOUNT_INIT(int n)
 {
 	return (refcount_t)REFCOUNT_INIT(n);
diff --git a/rust/kernel/alloc.rs b/rust/kernel/alloc.rs
index f1c2c4aa22d2..7ab2b33f19d4 100644
--- a/rust/kernel/alloc.rs
+++ b/rust/kernel/alloc.rs
@@ -20,6 +20,13 @@
 #[derive(Clone, Copy)]
 pub struct Flags(u32);
 
+impl Flags {
+    /// Get the raw representation of this flag.
+    pub(crate) fn as_raw(self) -> u32 {
+        self.0
+    }
+}
+
 impl core::ops::BitOr for Flags {
     type Output = Self;
     fn bitor(self, rhs: Self) -> Self::Output {
diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
index 7ee807ae4680..048e1662829a 100644
--- a/rust/kernel/lib.rs
+++ b/rust/kernel/lib.rs
@@ -35,6 +35,7 @@
 pub mod kunit;
 #[cfg(CONFIG_NET)]
 pub mod net;
+pub mod page;
 pub mod prelude;
 pub mod print;
 mod static_assert;
diff --git a/rust/kernel/page.rs b/rust/kernel/page.rs
new file mode 100644
index 000000000000..121d20066645
--- /dev/null
+++ b/rust/kernel/page.rs
@@ -0,0 +1,250 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Kernel page allocation and management.
+
+use crate::{
+    alloc::{AllocError, Flags},
+    bindings,
+    error::code::*,
+    error::Result,
+    uaccess::UserSliceReader,
+};
+use core::ptr::{self, NonNull};
+
+/// A bitwise shift for the page size.
+pub const PAGE_SHIFT: usize = bindings::PAGE_SHIFT as usize;
+
+/// The number of bytes in a page.
+pub const PAGE_SIZE: usize = bindings::PAGE_SIZE;
+
+/// A bitmask that gives the page containing a given address.
+pub const PAGE_MASK: usize = !(PAGE_SIZE - 1);
+
+/// A pointer to a page that owns the page allocation.
+///
+/// # Invariants
+///
+/// The pointer is valid, and has ownership over the page.
+pub struct Page {
+    page: NonNull<bindings::page>,
+}
+
+// SAFETY: Pages have no logic that relies on them staying on a given thread, so moving them across
+// threads is safe.
+unsafe impl Send for Page {}
+
+// SAFETY: Pages have no logic that relies on them not being accessed concurrently, so accessing
+// them concurrently is safe.
+unsafe impl Sync for Page {}
+
+impl Page {
+    /// Allocates a new page.
+    ///
+    /// # Examples
+    ///
+    /// Allocate memory for a page.
+    ///
+    /// ```
+    /// use kernel::page::Page;
+    ///
+    /// # fn dox() -> Result<(), kernel::alloc::AllocError> {
+    /// let page = Page::alloc_page(GFP_KERNEL)?;
+    /// # Ok(()) }
+    /// ```
+    ///
+    /// Allocate memory for a page and zero its contents.
+    ///
+    /// ```
+    /// use kernel::page::Page;
+    ///
+    /// # fn dox() -> Result<(), kernel::alloc::AllocError> {
+    /// let page = Page::alloc_page(GFP_KERNEL | __GFP_ZERO)?;
+    /// # Ok(()) }
+    /// ```
+    pub fn alloc_page(flags: Flags) -> Result<Self, AllocError> {
+        // SAFETY: Depending on the value of `gfp_flags`, this call may sleep. Other than that, it
+        // is always safe to call this method.
+        let page = unsafe { bindings::alloc_pages(flags.as_raw(), 0) };
+        let page = NonNull::new(page).ok_or(AllocError)?;
+        // INVARIANT: We just successfully allocated a page, so we now have ownership of the newly
+        // allocated page. We transfer that ownership to the new `Page` object.
+        Ok(Self { page })
+    }
+
+    /// Returns a raw pointer to the page.
+    pub fn as_ptr(&self) -> *mut bindings::page {
+        self.page.as_ptr()
+    }
+
+    /// Runs a piece of code with this page mapped to an address.
+    ///
+    /// The page is unmapped when this call returns.
+    ///
+    /// # Using the raw pointer
+    ///
+    /// It is up to the caller to use the provided raw pointer correctly. The pointer is valid for
+    /// `PAGE_SIZE` bytes and for the duration in which the closure is called. The pointer might
+    /// only be mapped on the current thread, and when that is the case, dereferencing it on other
+    /// threads is UB. Other than that, the usual rules for dereferencing a raw pointer apply:
+    /// don't cause data races, the memory may be uninitialized, and so on.
+    ///
+    /// If multiple threads map the same page at the same time, then they may reference it with
+    /// different addresses. However, even if the addresses are different, the underlying memory is
+    /// still the same for these purposes (e.g., it's still a data race if they both write to the
+    /// same underlying byte at the same time).
+    fn with_page_mapped<T>(&self, f: impl FnOnce(*mut u8) -> T) -> T {
+        // SAFETY: `page` is valid due to the type invariants on `Page`.
+        let mapped_addr = unsafe { bindings::kmap_local_page(self.as_ptr()) };
+
+        let res = f(mapped_addr.cast());
+
+        // This unmaps the page mapped above.
+        //
+        // SAFETY: Since this API takes the user code as a closure, it can only be used in a manner
+        // where the pages are unmapped in reverse order. This is as required by `kunmap_local`.
+        //
+        // In other words, if this call to `kunmap_local` happens when a different page should be
+        // unmapped first, then there must necessarily be a call to `kmap_local_page` other than
+        // the call just above in `with_page_mapped` that made that possible. In this case, it is
+        // the unsafe block that wraps that other call that is incorrect.
+        unsafe { bindings::kunmap_local(mapped_addr) };
+
+        res
+    }
+
+    /// Runs a piece of code with a raw pointer to a slice of this page, with bounds checking.
+    ///
+    /// If `f` is called, then it will be called with a pointer that points at `off` bytes into the
+    /// page, and the pointer will be valid for at least `len` bytes. The pointer is only valid on
+    /// this task, as this method uses a local mapping.
+    ///
+    /// If `off` and `len` refer to a region outside of this page, then this method returns
+    /// `EINVAL` and does not call `f`.
+    ///
+    /// # Using the raw pointer
+    ///
+    /// It is up to the caller to use the provided raw pointer correctly. The pointer is valid for
+    /// `len` bytes and for the duration in which the closure is called. The pointer might only be
+    /// mapped on the current thread, and when that is the case, dereferencing it on other threads
+    /// is UB. Other than that, the usual rules for dereferencing a raw pointer apply: don't cause
+    /// data races, the memory may be uninitialized, and so on.
+    ///
+    /// If multiple threads map the same page at the same time, then they may reference it with
+    /// different addresses. However, even if the addresses are different, the underlying memory is
+    /// still the same for these purposes (e.g., it's still a data race if they both write to the
+    /// same underlying byte at the same time).
+    fn with_pointer_into_page<T>(
+        &self,
+        off: usize,
+        len: usize,
+        f: impl FnOnce(*mut u8) -> Result<T>,
+    ) -> Result<T> {
+        let bounds_ok = off <= PAGE_SIZE && len <= PAGE_SIZE && (off + len) <= PAGE_SIZE;
+
+        if bounds_ok {
+            self.with_page_mapped(move |page_addr| {
+                // SAFETY: The `off` integer is at most `PAGE_SIZE`, so this pointer offset will
+                // result in a pointer that is in bounds or one off the end of the page.
+                f(unsafe { page_addr.add(off) })
+            })
+        } else {
+            Err(EINVAL)
+        }
+    }
+
+    /// Maps the page and reads from it into the given buffer.
+    ///
+    /// This method will perform bounds checks on the page offset. If `offset .. offset+len` goes
+    /// outside of the page, then this call returns `EINVAL`.
+    ///
+    /// # Safety
+    ///
+    /// * Callers must ensure that `dst` is valid for writing `len` bytes.
+    /// * Callers must ensure that this call does not race with a write to the same page that
+    ///   overlaps with this read.
+    pub unsafe fn read_raw(&self, dst: *mut u8, offset: usize, len: usize) -> Result {
+        self.with_pointer_into_page(offset, len, move |src| {
+            // SAFETY: If `with_pointer_into_page` calls into this closure, then it has performed a
+            // bounds check and guarantees that `src` is valid for `len` bytes.
+            //
+            // The caller guarantees that there is no data race.
+            unsafe { ptr::copy_nonoverlapping(src, dst, len) };
+            Ok(())
+        })
+    }
+
+    /// Maps the page and writes into it from the given buffer.
+    ///
+    /// This method will perform bounds checks on the page offset. If `offset .. offset+len` goes
+    /// outside of the page, then this call returns `EINVAL`.
+    ///
+    /// # Safety
+    ///
+    /// * Callers must ensure that `src` is valid for reading `len` bytes.
+    /// * Callers must ensure that this call does not race with a read or write to the same page
+    ///   that overlaps with this write.
+    pub unsafe fn write_raw(&self, src: *const u8, offset: usize, len: usize) -> Result {
+        self.with_pointer_into_page(offset, len, move |dst| {
+            // SAFETY: If `with_pointer_into_page` calls into this closure, then it has performed a
+            // bounds check and guarantees that `dst` is valid for `len` bytes.
+            //
+            // The caller guarantees that there is no data race.
+            unsafe { ptr::copy_nonoverlapping(src, dst, len) };
+            Ok(())
+        })
+    }
+
+    /// Maps the page and zeroes the given slice.
+    ///
+    /// This method will perform bounds checks on the page offset. If `offset .. offset+len` goes
+    /// outside of the page, then this call returns `EINVAL`.
+    ///
+    /// # Safety
+    ///
+    /// Callers must ensure that this call does not race with a read or write to the same page that
+    /// overlaps with this write.
+    pub unsafe fn fill_zero_raw(&self, offset: usize, len: usize) -> Result {
+        self.with_pointer_into_page(offset, len, move |dst| {
+            // SAFETY: If `with_pointer_into_page` calls into this closure, then it has performed a
+            // bounds check and guarantees that `dst` is valid for `len` bytes.
+            //
+            // The caller guarantees that there is no data race.
+            unsafe { ptr::write_bytes(dst, 0u8, len) };
+            Ok(())
+        })
+    }
+
+    /// Copies data from userspace into this page.
+    ///
+    /// This method will perform bounds checks on the page offset. If `offset .. offset+len` goes
+    /// outside of the page, then this call returns `EINVAL`.
+    ///
+    /// Like the other `UserSliceReader` methods, data races are allowed on the userspace address.
+    /// However, they are not allowed on the page you are copying into.
+    ///
+    /// # Safety
+    ///
+    /// Callers must ensure that this call does not race with a read or write to the same page that
+    /// overlaps with this write.
+    pub unsafe fn copy_from_user_slice_raw(
+        &self,
+        reader: &mut UserSliceReader,
+        offset: usize,
+        len: usize,
+    ) -> Result {
+        self.with_pointer_into_page(offset, len, move |dst| {
+            // SAFETY: If `with_pointer_into_page` calls into this closure, then it has performed a
+            // bounds check and guarantees that `dst` is valid for `len` bytes. Furthermore, we
+            // have exclusive access to the slice since the caller guarantees that there are no
+            // races.
+            reader.read_raw(unsafe { core::slice::from_raw_parts_mut(dst.cast(), len) })
+        })
+    }
+}
+
+impl Drop for Page {
+    fn drop(&mut self) {
+        // SAFETY: By the type invariants, we have ownership of the page and can free it.
+        unsafe { bindings::__free_pages(self.page.as_ptr(), 0) };
+    }
+}
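
As a quick illustration of the accessors above (a minimal sketch, not
part of the diff; the function name `smoke_test` is hypothetical), one
could allocate a zeroed page, write and read back a few bytes, and
probe the bounds check:

    use kernel::page::{Page, PAGE_SIZE};
    use kernel::prelude::*;

    fn smoke_test() -> Result {
        // A freshly allocated, zeroed, order-zero page.
        let page = Page::alloc_page(GFP_KERNEL | __GFP_ZERO)?;

        let src = [1u8, 2, 3, 4];
        let mut dst = [0u8; 4];

        // SAFETY: `src` and `dst` are valid for 4 bytes, and `page` is not
        // shared with any other thread, so there are no racing accesses.
        unsafe {
            page.write_raw(src.as_ptr(), 0, src.len())?;
            page.read_raw(dst.as_mut_ptr(), 0, dst.len())?;
        }
        assert_eq!(src, dst);

        // An out-of-bounds range is rejected with EINVAL rather than UB.
        // SAFETY: As above; the call fails the bounds check and never
        // touches memory.
        assert!(unsafe { page.read_raw(dst.as_mut_ptr(), PAGE_SIZE, 1) }.is_err());

        Ok(())
    }

Note that the failure path never maps the page at all:
`with_pointer_into_page` performs the bounds check before calling
`with_page_mapped`, so `kmap_local_page` is only reached for valid
ranges.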