From patchwork Fri Nov 22 15:40:27 2024
X-Patchwork-Submitter: Alice Ryhl
X-Patchwork-Id: 13883303
Date: Fri, 22 Nov 2024 15:40:27 +0000
In-Reply-To: <20241122-vma-v9-0-7127bfcdd54e@google.com>
Mime-Version: 1.0
References: <20241122-vma-v9-0-7127bfcdd54e@google.com>
X-Mailer: b4 0.13.0
Message-ID: <20241122-vma-v9-2-7127bfcdd54e@google.com>
Subject: [PATCH v9 2/8] mm: rust: add vm_area_struct methods that require read access
From: Alice Ryhl
To: Miguel Ojeda, Matthew Wilcox, Lorenzo Stoakes, Vlastimil Babka,
    John Hubbard, "Liam R. Howlett", Andrew Morton, Greg Kroah-Hartman,
    Arnd Bergmann, Christian Brauner, Jann Horn, Suren Baghdasaryan
Cc: Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    rust-for-linux@vger.kernel.org, Alice Ryhl, Andreas Hindborg
This adds a type called VmAreaRef which is used when referencing a vma that you
have read access to. Here, read access means that you hold either the
mmap read lock or the vma read lock (or stronger).

Additionally, a vma_lookup method is added to the mmap read guard, which
enables you to obtain a &VmAreaRef in safe Rust code.

This patch only provides a way to take the mmap read lock, but a
follow-up patch adds a way to take just the vma read lock.

Acked-by: Lorenzo Stoakes (for mm bits)
Signed-off-by: Alice Ryhl
---
 rust/helpers/mm.c      |   6 ++
 rust/kernel/mm.rs      |  21 ++++++
 rust/kernel/mm/virt.rs | 176 +++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 203 insertions(+)

diff --git a/rust/helpers/mm.c b/rust/helpers/mm.c
index 7201747a5d31..7b72eb065a3e 100644
--- a/rust/helpers/mm.c
+++ b/rust/helpers/mm.c
@@ -37,3 +37,9 @@ void rust_helper_mmap_read_unlock(struct mm_struct *mm)
 {
 	mmap_read_unlock(mm);
 }
+
+struct vm_area_struct *rust_helper_vma_lookup(struct mm_struct *mm,
+					      unsigned long addr)
+{
+	return vma_lookup(mm, addr);
+}
diff --git a/rust/kernel/mm.rs b/rust/kernel/mm.rs
index 84cba581edaa..ace8e7d57afe 100644
--- a/rust/kernel/mm.rs
+++ b/rust/kernel/mm.rs
@@ -12,6 +12,8 @@
 };
 use core::{ops::Deref, ptr::NonNull};
 
+pub mod virt;
+
 /// A wrapper for the kernel's `struct mm_struct`.
 ///
 /// Since `mm_users` may be zero, the associated address space may not exist anymore. You can use
@@ -210,6 +212,25 @@ pub struct MmapReadGuard<'a> {
     _nts: NotThreadSafe,
 }
 
+impl<'a> MmapReadGuard<'a> {
+    /// Look up a vma at the given address.
+    #[inline]
+    pub fn vma_lookup(&self, vma_addr: usize) -> Option<&virt::VmAreaRef> {
+        // SAFETY: We hold a reference to the mm, so the pointer must be valid. Any value is okay
+        // for `vma_addr`.
+        let vma = unsafe { bindings::vma_lookup(self.mm.as_raw(), vma_addr as _) };
+
+        if vma.is_null() {
+            None
+        } else {
+            // SAFETY: We just checked that a vma was found, so the pointer is valid. Furthermore,
+            // the returned area will borrow from this read lock guard, so it can only be used
+            // while the mmap read lock is still held.
+            unsafe { Some(virt::VmAreaRef::from_raw(vma)) }
+        }
+    }
+}
+
 impl Drop for MmapReadGuard<'_> {
     #[inline]
     fn drop(&mut self) {
diff --git a/rust/kernel/mm/virt.rs b/rust/kernel/mm/virt.rs
new file mode 100644
index 000000000000..6df145fea128
--- /dev/null
+++ b/rust/kernel/mm/virt.rs
@@ -0,0 +1,176 @@
+// SPDX-License-Identifier: GPL-2.0
+
+// Copyright (C) 2024 Google LLC.
+
+//! Virtual memory.
+
+use crate::{bindings, types::Opaque};
+
+/// A wrapper for the kernel's `struct vm_area_struct` with read access.
+///
+/// It represents an area of virtual memory.
+///
+/// # Invariants
+///
+/// The caller must hold the mmap read lock or the vma read lock.
+#[repr(transparent)]
+pub struct VmAreaRef {
+    vma: Opaque<bindings::vm_area_struct>,
+}
+
+// Methods you can call when holding the mmap or vma read lock (or stronger). They must be usable
+// no matter what the vma flags are.
+impl VmAreaRef {
+    /// Access a virtual memory area given a raw pointer.
+    ///
+    /// # Safety
+    ///
+    /// Callers must ensure that `vma` is valid for the duration of 'a, and that the mmap or vma
+    /// read lock (or stronger) is held for at least the duration of 'a.
+    #[inline]
+    pub unsafe fn from_raw<'a>(vma: *const bindings::vm_area_struct) -> &'a Self {
+        // SAFETY: The caller ensures that the invariants are satisfied for the duration of 'a.
+        unsafe { &*vma.cast() }
+    }
+
+    /// Returns a raw pointer to this area.
+    #[inline]
+    pub fn as_ptr(&self) -> *mut bindings::vm_area_struct {
+        self.vma.get()
+    }
+
+    /// Returns the flags associated with the virtual memory area.
+    ///
+    /// The possible flags are a combination of the constants in [`flags`].
+    #[inline]
+    pub fn flags(&self) -> vm_flags_t {
+        // SAFETY: By the type invariants, the caller holds at least the mmap read lock, so this
+        // access is not a data race.
+        unsafe { (*self.as_ptr()).__bindgen_anon_2.vm_flags as _ }
+    }
+
+    /// Returns the (inclusive) start address of the virtual memory area.
+    #[inline]
+    pub fn start(&self) -> usize {
+        // SAFETY: By the type invariants, the caller holds at least the mmap read lock, so this
+        // access is not a data race.
+        unsafe { (*self.as_ptr()).__bindgen_anon_1.__bindgen_anon_1.vm_start as _ }
+    }
+
+    /// Returns the (exclusive) end address of the virtual memory area.
+    #[inline]
+    pub fn end(&self) -> usize {
+        // SAFETY: By the type invariants, the caller holds at least the mmap read lock, so this
+        // access is not a data race.
+        unsafe { (*self.as_ptr()).__bindgen_anon_1.__bindgen_anon_1.vm_end as _ }
+    }
+
+    /// Zap pages in the given page range.
+    ///
+    /// This clears page table mappings for the range at the leaf level, leaving all other page
+    /// tables intact, and freeing any memory referenced by the VMA in this range. That is,
+    /// anonymous memory is completely freed, file-backed memory has its reference count on page
+    /// cache folios dropped, and any dirty data will still be written back to disk as usual.
+    #[inline]
+    pub fn zap_page_range_single(&self, address: usize, size: usize) {
+        // SAFETY: By the type invariants, the caller has read access to this VMA, which is
+        // sufficient for this method call. This method has no requirements on the vma flags. Any
+        // value of `address` and `size` is allowed.
+        unsafe {
+            bindings::zap_page_range_single(
+                self.as_ptr(),
+                address as _,
+                size as _,
+                core::ptr::null_mut(),
+            )
+        };
+    }
+}
+
+/// The integer type used for vma flags.
+#[doc(inline)]
+pub use bindings::vm_flags_t;
+
+/// All possible flags for [`VmAreaRef`].
+pub mod flags {
+    use super::vm_flags_t;
+    use crate::bindings;
+
+    /// No flags are set.
+    pub const NONE: vm_flags_t = bindings::VM_NONE as _;
+
+    /// Mapping allows reads.
+    pub const READ: vm_flags_t = bindings::VM_READ as _;
+
+    /// Mapping allows writes.
+    pub const WRITE: vm_flags_t = bindings::VM_WRITE as _;
+
+    /// Mapping allows execution.
+    pub const EXEC: vm_flags_t = bindings::VM_EXEC as _;
+
+    /// Mapping is shared.
+    pub const SHARED: vm_flags_t = bindings::VM_SHARED as _;
+
+    /// Mapping may be updated to allow reads.
+    pub const MAYREAD: vm_flags_t = bindings::VM_MAYREAD as _;
+
+    /// Mapping may be updated to allow writes.
+    pub const MAYWRITE: vm_flags_t = bindings::VM_MAYWRITE as _;
+
+    /// Mapping may be updated to allow execution.
+    pub const MAYEXEC: vm_flags_t = bindings::VM_MAYEXEC as _;
+
+    /// Mapping may be updated to be shared.
+    pub const MAYSHARE: vm_flags_t = bindings::VM_MAYSHARE as _;
+
+    /// Page-ranges managed without `struct page`, just pure PFN.
+    pub const PFNMAP: vm_flags_t = bindings::VM_PFNMAP as _;
+
+    /// Memory mapped I/O or similar.
+    pub const IO: vm_flags_t = bindings::VM_IO as _;
+
+    /// Do not copy this vma on fork.
+    pub const DONTCOPY: vm_flags_t = bindings::VM_DONTCOPY as _;
+
+    /// Cannot expand with mremap().
+    pub const DONTEXPAND: vm_flags_t = bindings::VM_DONTEXPAND as _;
+
+    /// Lock the pages covered when they are faulted in.
+    pub const LOCKONFAULT: vm_flags_t = bindings::VM_LOCKONFAULT as _;
+
+    /// Is a VM accounted object.
+    pub const ACCOUNT: vm_flags_t = bindings::VM_ACCOUNT as _;
+
+    /// Should the VM suppress accounting.
+    pub const NORESERVE: vm_flags_t = bindings::VM_NORESERVE as _;
+
+    /// Huge TLB Page VM.
+    pub const HUGETLB: vm_flags_t = bindings::VM_HUGETLB as _;
+
+    /// Synchronous page faults. (DAX-specific)
+    pub const SYNC: vm_flags_t = bindings::VM_SYNC as _;
+
+    /// Architecture-specific flag.
+    pub const ARCH_1: vm_flags_t = bindings::VM_ARCH_1 as _;
+
+    /// Wipe VMA contents in child on fork.
+    pub const WIPEONFORK: vm_flags_t = bindings::VM_WIPEONFORK as _;
+
+    /// Do not include in the core dump.
+    pub const DONTDUMP: vm_flags_t = bindings::VM_DONTDUMP as _;
+
+    /// Not soft dirty clean area.
+    pub const SOFTDIRTY: vm_flags_t = bindings::VM_SOFTDIRTY as _;
+
+    /// Can contain `struct page` and pure PFN pages.
+    pub const MIXEDMAP: vm_flags_t = bindings::VM_MIXEDMAP as _;
+
+    /// MADV_HUGEPAGE marked this vma.
+    pub const HUGEPAGE: vm_flags_t = bindings::VM_HUGEPAGE as _;
+
+    /// MADV_NOHUGEPAGE marked this vma.
+    pub const NOHUGEPAGE: vm_flags_t = bindings::VM_NOHUGEPAGE as _;
+
+    /// KSM may merge identical pages.
+    pub const MERGEABLE: vm_flags_t = bindings::VM_MERGEABLE as _;
+}