From patchwork Sun Aug 13 18:25:42 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Fabio M. De Francesco"
X-Patchwork-Id: 13352210
From: "Fabio M. De Francesco"
To: Jonathan Corbet, Jonathan Cameron, Linus Walleij, Mike Rapoport,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org
Cc: "Fabio M. De Francesco", Andrew Morton, Ira Weiny, Matthew Wilcox,
	Randy Dunlap
Subject: [PATCH v2] Documentation/page_tables: Add info about MMU/TLB and Page Faults
Date: Sun, 13 Aug 2023 20:25:42 +0200
Message-ID: <20230813182552.31792-1-fmdefrancesco@gmail.com>
X-Mailer: git-send-email 2.41.0
MIME-Version: 1.0
Extend page_tables.rst by adding a section about the role of MMU and TLB
in translating between virtual addresses and physical page frames.
Furthermore, explain the concept behind Page Faults and how the Linux
kernel handles TLB misses. Finally, briefly explain how and why to
disable the page fault handler.

Cc: Andrew Morton
Cc: Ira Weiny
Cc: Jonathan Cameron
Cc: Jonathan Corbet
Cc: Linus Walleij
Cc: Matthew Wilcox
Cc: Mike Rapoport
Cc: Randy Dunlap
Signed-off-by: Fabio M. De Francesco
Reviewed-by: Linus Walleij
---
v1 -> v2:

This version takes into account the comments provided by Mike (thanks!).
I hope I haven't overlooked anything he suggested :-)
https://lore.kernel.org/all/20230807105010.GK2607694@kernel.org/
Furthermore, v2 adds a little more information about swapping which was
not present in v1.

Before becoming a "real" patch, this was an RFC PATCH in its 2nd version
for a week or so, until I received comments and suggestions from Jonathan
Cameron (thanks!); then it morphed into a real patch. The link to the
thread with the RFC PATCH v2 and the messages between Jonathan and me
starts at
https://lore.kernel.org/all/20230723120721.7139-1-fmdefrancesco@gmail.com/#r

 Documentation/mm/page_tables.rst | 128 +++++++++++++++++++++++++++++++
 1 file changed, 128 insertions(+)

diff --git a/Documentation/mm/page_tables.rst b/Documentation/mm/page_tables.rst
index 7840c1891751..ad9e52f2d7f1 100644
--- a/Documentation/mm/page_tables.rst
+++ b/Documentation/mm/page_tables.rst
@@ -152,3 +152,131 @@ Page table handling code that wishes to be architecture-neutral, such as the
 virtual memory manager, will need to be written so that it traverses all of the
 currently five levels. This style should also be preferred for
 architecture-specific code, so as to be robust to future changes.
+
+
+MMU, TLB, and Page Faults
+=========================
+
+The `Memory Management Unit (MMU)` is a hardware component that handles virtual
+to physical address translations. It may use relatively small caches in
+hardware called `Translation Lookaside Buffers (TLBs)` and `Page Walk Caches`
+to speed up these translations.
+
+When the CPU accesses a memory location, it provides a virtual address to the
+MMU, which checks whether a translation already exists in the TLB or in the
+Page Walk Caches (on architectures that support them). If no translation is
+found, the MMU walks the page tables to determine the physical address and to
+create the mapping.
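+
+For the sake of illustration, the decision that the MMU takes for every access
+is roughly the following. This is only a sketch in C-like pseudo-code: every
+helper below (`tlb_lookup()`, `page_table_walk()`, `tlb_fill()`,
+`raise_page_fault()`) is made up for this document, and a real MMU does all of
+this in hardware::
+
+    /* Purely illustrative: all helpers here are hypothetical. */
+    static unsigned long translate(unsigned long vaddr)
+    {
+            unsigned long paddr;
+
+            if (tlb_lookup(vaddr, &paddr))          /* cached translation */
+                    return paddr;
+
+            if (page_table_walk(vaddr, &paddr)) {   /* a valid mapping exists */
+                    tlb_fill(vaddr, paddr);         /* cache it for next time */
+                    return paddr;
+            }
+
+            raise_page_fault(vaddr);                /* let the kernel decide */
+            return 0;                               /* never reached */
+    }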
+
+Each page of memory has associated permission and dirty bits. The dirty bit
+for a page is set (i.e., turned on) when the page is written to, and indicates
+that the page has been modified since it was loaded into memory.
+
+If nothing prevents it, eventually the physical memory can be accessed and the
+requested operation on the physical frame is performed.
+
+There are several reasons why the MMU can't find certain translations. It could
+happen because the CPU is trying to access memory that the current task is not
+permitted to access, or because the data is not present in physical memory.
+
+When these conditions happen, the MMU triggers page faults, which are a type of
+exception that signals the CPU to pause the current execution and run a special
+function to handle them.
+
+Page faults may be caused by code bugs or by maliciously crafted addresses that
+the CPU is instructed to dereference and access. A thread of a process could
+use an instruction to address (non-shared) memory which does not belong to its
+own address space, or could try to execute an instruction that wants to write
+to a read-only location.
+
+If the above-mentioned conditions happen in user-space, the kernel sends a
+`Segmentation Fault` (SIGSEGV) signal to the current thread. That signal
+usually causes the termination of the thread and of the process it belongs to.
+
+However, there are also other common and expected causes of page faults. These
+are triggered by process management optimization techniques called "Lazy
+Allocation" and "Copy-on-Write". Page faults may also happen when frames have
+been swapped out to persistent storage (a swap partition or file) and evicted
+from their physical locations.
+
+These techniques improve memory efficiency, reduce latency, and minimize space
+occupation. This document won't go deeper into the details of "Lazy Allocation"
+and "Copy-on-Write" because these subjects are out of scope, as they belong to
+Process Address Management.
+
+Swapping differs from the other techniques mentioned above because it is not
+desirable in itself: it is performed as a means to free memory under heavy
+pressure.
+
+Swapping can't work for memory mapped by kernel logical addresses. These are a
+subset of the kernel virtual space that directly maps a contiguous range of
+physical memory. Given any logical address, its physical address is determined
+with simple arithmetic on an offset. Accesses to logical addresses are fast
+because they avoid the need for complex page table lookups, at the expense of
+the frames not being evictable or pageable out.
+
+If everything else fails to make room for the data that must be present in
+physical frames, the kernel invokes the out-of-memory (OOM) killer, which makes
+room by terminating lower priority processes until the pressure falls below a
+safe threshold.
+
+What follows is a simplified, high altitude view of how the Linux kernel
+handles these page faults: how it creates page tables and page table entries,
+checks whether the memory is present and, if not, requests that the data be
+loaded from persistent storage or from other devices, and updates the MMU and
+its caches.
+
+The first steps are architecture dependent. Most architectures jump to
+`do_page_fault()`, whereas the x86 interrupt handler is defined by the
+`DEFINE_IDTENTRY_RAW_ERRORCODE()` macro, which calls `handle_page_fault()`.
+
+Whatever the route, all architectures end up invoking `handle_mm_fault()`
+which, in turn, (most likely) ends up calling `__handle_mm_fault()` to carry
+out the actual work of allocating the page tables.
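+
+The fragment below is a heavily simplified sketch of the shape that such an
+architecture-specific handler commonly takes, not actual kernel code: real
+handlers also take the mmap lock, distinguish kernel-mode from user-mode
+faults, handle retries and OOM, and more, and the exact interfaces may differ
+between kernel versions. `fault_is_permitted()` and `bad_area()` stand in for
+architecture-specific logic::
+
+    static void sketch_do_page_fault(struct pt_regs *regs, unsigned long addr)
+    {
+            struct vm_area_struct *vma;
+            vm_fault_t fault;
+
+            vma = find_vma(current->mm, addr);      /* which mapping, if any? */
+            if (!vma || !fault_is_permitted(vma, regs)) {
+                    bad_area(regs, addr);           /* ends with SIGSEGV */
+                    return;
+            }
+
+            /* Hand over to the generic memory management code. */
+            fault = handle_mm_fault(vma, addr, FAULT_FLAG_DEFAULT, regs);
+            if (fault & VM_FAULT_ERROR)
+                    bad_area(regs, addr);           /* could not be resolved */
+    }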
+
+In the unfortunate case in which `__handle_mm_fault()` cannot be called, it
+means that the virtual address points to areas of physical memory which are not
+permitted to be accessed (at least from the current context). This condition
+results in the kernel sending the above-mentioned SIGSEGV signal to the process
+and leads to the consequences already explained.
+
+`__handle_mm_fault()` carries out its work by calling several functions to
+find the entry offsets of the upper layers of the page tables and to allocate
+the tables that it may need.
+
+The functions that look for an offset have names like `*_offset()`, where the
+"*" stands for pgd, p4d, pud, pmd, or pte; the functions that allocate the
+corresponding tables, layer by layer, are instead called `*_alloc()`, using the
+above-mentioned convention to name them after the corresponding types of tables
+in the hierarchy.
+
+The page table walk may end at one of the middle or upper layers (PMD, PUD).
+
+Linux supports larger page sizes than the usual 4KB (i.e., the so called
+`huge pages`). When using these kinds of larger pages, higher level entries can
+directly map them, with no need to use lower level page entries (PTE). Huge
+pages contain large contiguous physical regions that usually span from 2MB to
+1GB. They are respectively mapped by the PMD and PUD page entries.
+
+Huge pages bring several benefits, like reduced TLB pressure, reduced page
+table overhead, more efficient memory allocation, and performance improvements
+for certain workloads. However, these benefits come with trade-offs, like
+wasted memory and allocation challenges. Huge pages are out of the scope of the
+present document, therefore it won't go into further detail about them.
+
+At the very end of the walk with allocations, if it didn't return any errors,
+`__handle_mm_fault()` finally calls `handle_pte_fault()`, which via `do_fault()`
+performs one of `do_read_fault()`, `do_cow_fault()`, or `do_shared_fault()`.
+"read", "cow", and "shared" give hints about the reasons and the kind of fault
+it is handling.
+
+The actual implementation of the workflow is very complex. Its design allows
+Linux to handle page faults in a way that is tailored to the specific
+characteristics of each architecture, while still sharing a common overall
+structure.
+
+To conclude this high altitude overview of how Linux handles page faults, let's
+add that the page fault handlers can be disabled and enabled respectively with
+`pagefault_disable()` and `pagefault_enable()`.
+
+Several code paths make use of the latter two functions because they need to
+disable traps into the page fault handler, mostly to prevent deadlocks.
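+
+As an example of how the latter pair is typically used, the following sketch
+(built around the hypothetical helper `read_user_word()`, which is not taken
+from the kernel sources) first tries to read user memory with page faults
+disabled, for instance because the caller holds a lock that the fault handler
+might also need, and only falls back to an access that may fault and sleep
+once it is safe to do so::
+
+    static int read_user_word(u32 *dst, const u32 __user *src)
+    {
+            unsigned long left;
+
+            pagefault_disable();
+            left = __copy_from_user_inatomic(dst, src, sizeof(*dst));
+            pagefault_enable();
+
+            if (!left)
+                    return 0;       /* fast path: no page fault was needed */
+
+            /* Slow path: faulting (and therefore sleeping) is allowed here. */
+            return copy_from_user(dst, src, sizeof(*dst)) ? -EFAULT : 0;
+    }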