
[RFC,18/18] Documentation: add document for pte_ref

Message ID 20220429133552.33768-19-zhengqi.arch@bytedance.com (mailing list archive)
State New
Series Try to free user PTE page table pages

Commit Message

Qi Zheng April 29, 2022, 1:35 p.m. UTC
This commit adds a document for pte_ref under `Documentation/vm/`.

Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
---
 Documentation/vm/index.rst   |   1 +
 Documentation/vm/pte_ref.rst | 210 +++++++++++++++++++++++++++++++++++
 2 files changed, 211 insertions(+)
 create mode 100644 Documentation/vm/pte_ref.rst

Comments

Bagas Sanjaya April 30, 2022, 1:19 p.m. UTC | #1
Hi Qi,

On Fri, Apr 29, 2022 at 09:35:52PM +0800, Qi Zheng wrote:
> +Now in order to pursue high performance, applications mostly use some
> +high-performance user-mode memory allocators, such as jemalloc or tcmalloc.
> +These memory allocators use madvise(MADV_DONTNEED or MADV_FREE) to release
> +physical memory for the following reasons::
> +
> + First of all, we should hold as few write locks of mmap_lock as possible,
> + since the mmap_lock semaphore has long been a contention point in the
> + memory management subsystem. The mmap()/munmap() hold the write lock, and
> + the madvise(MADV_DONTNEED or MADV_FREE) hold the read lock, so using
> + madvise() instead of munmap() to released physical memory can reduce the
> + competition of the mmap_lock.
> +
> + Secondly, after using madvise() to release physical memory, there is no
> + need to build vma and allocate page tables again when accessing the same
> + virtual address again, which can also save some time.
> +

I think we can use enumerated list, like below:

-- >8 --

diff --git a/Documentation/vm/pte_ref.rst b/Documentation/vm/pte_ref.rst
index 0ac1e5a408d7c6..67b18e74fcb367 100644
--- a/Documentation/vm/pte_ref.rst
+++ b/Documentation/vm/pte_ref.rst
@@ -10,18 +10,18 @@ Preface
 Now in order to pursue high performance, applications mostly use some
 high-performance user-mode memory allocators, such as jemalloc or tcmalloc.
 These memory allocators use madvise(MADV_DONTNEED or MADV_FREE) to release
-physical memory for the following reasons::
-
- First of all, we should hold as few write locks of mmap_lock as possible,
- since the mmap_lock semaphore has long been a contention point in the
- memory management subsystem. The mmap()/munmap() hold the write lock, and
- the madvise(MADV_DONTNEED or MADV_FREE) hold the read lock, so using
- madvise() instead of munmap() to released physical memory can reduce the
- competition of the mmap_lock.
-
- Secondly, after using madvise() to release physical memory, there is no
- need to build vma and allocate page tables again when accessing the same
- virtual address again, which can also save some time.
+physical memory for the following reasons:
+
+1. We should hold as few write locks of mmap_lock as possible,
+   since the mmap_lock semaphore has long been a contention point in the
+   memory management subsystem. The mmap()/munmap() hold the write lock, and
+   the madvise(MADV_DONTNEED or MADV_FREE) hold the read lock, so using
+   madvise() instead of munmap() to released physical memory can reduce the
+   competition of the mmap_lock.
+
+2. After using madvise() to release physical memory, there is no
+   need to build vma and allocate page tables again when accessing the same
+   virtual address again, which can also save some time.
 
 The following is the largest user PTE page table memory that can be
 allocated by a single user process in a 32-bit and a 64-bit system.

> +The following is the largest user PTE page table memory that can be
> +allocated by a single user process in a 32-bit and a 64-bit system.
> +

We can say "assuming 4K page size" here,

> ++---------------------------+--------+---------+
> +|                           | 32-bit | 64-bit  |
> ++===========================+========+=========+
> +| user PTE page table pages | 3 MiB  | 512 GiB |
> ++---------------------------+--------+---------+
> +| user PMD page table pages | 3 KiB  | 1 GiB   |
> ++---------------------------+--------+---------+
> +
> +(for 32-bit, take 3G user address space, 4K page size as an example;
> + for 64-bit, take 48-bit address width, 4K page size as an example.)
> +

... instead of here.

> +There is also a lock-less scenario(such as fast GUP). Fortunately, we don't need
> +to do any additional operations to ensure that the system is in order. Take fast
> +GUP as an example::
> +
> +	thread A		thread B
> +	fast GUP		madvise(MADV_DONTNEED)
> +	========		======================
> +
> +	get_user_pages_fast_only()
> +	--> local_irq_save();
> +				call_rcu(pte_free_rcu)
> +	    gup_pgd_range();
> +	    local_irq_restore();
> +	    			/* do pte_free_rcu() */
> +

I see whitespace warning circa do pte_free_rcu() line above when
applying this series.

Thanks.
Qi Zheng April 30, 2022, 1:32 p.m. UTC | #2
On 2022/4/30 9:19 PM, Bagas Sanjaya wrote:
> Hi Qi,
> 
> On Fri, Apr 29, 2022 at 09:35:52PM +0800, Qi Zheng wrote:
>> +Now in order to pursue high performance, applications mostly use some
>> +high-performance user-mode memory allocators, such as jemalloc or tcmalloc.
>> +These memory allocators use madvise(MADV_DONTNEED or MADV_FREE) to release
>> +physical memory for the following reasons::
>> +
>> + First of all, we should hold as few write locks of mmap_lock as possible,
>> + since the mmap_lock semaphore has long been a contention point in the
>> + memory management subsystem. The mmap()/munmap() hold the write lock, and
>> + the madvise(MADV_DONTNEED or MADV_FREE) hold the read lock, so using
>> + madvise() instead of munmap() to released physical memory can reduce the
>> + competition of the mmap_lock.
>> +
>> + Secondly, after using madvise() to release physical memory, there is no
>> + need to build vma and allocate page tables again when accessing the same
>> + virtual address again, which can also save some time.
>> +
> 
> I think we can use enumerated list, like below:

Thanks for your review, LGTM, will do.

> 
> -- >8 --
> 
> diff --git a/Documentation/vm/pte_ref.rst b/Documentation/vm/pte_ref.rst
> index 0ac1e5a408d7c6..67b18e74fcb367 100644
> --- a/Documentation/vm/pte_ref.rst
> +++ b/Documentation/vm/pte_ref.rst
> @@ -10,18 +10,18 @@ Preface
>   Now in order to pursue high performance, applications mostly use some
>   high-performance user-mode memory allocators, such as jemalloc or tcmalloc.
>   These memory allocators use madvise(MADV_DONTNEED or MADV_FREE) to release
> -physical memory for the following reasons::
> -
> - First of all, we should hold as few write locks of mmap_lock as possible,
> - since the mmap_lock semaphore has long been a contention point in the
> - memory management subsystem. The mmap()/munmap() hold the write lock, and
> - the madvise(MADV_DONTNEED or MADV_FREE) hold the read lock, so using
> - madvise() instead of munmap() to released physical memory can reduce the
> - competition of the mmap_lock.
> -
> - Secondly, after using madvise() to release physical memory, there is no
> - need to build vma and allocate page tables again when accessing the same
> - virtual address again, which can also save some time.
> +physical memory for the following reasons:
> +
> +1. We should hold as few write locks of mmap_lock as possible,
> +   since the mmap_lock semaphore has long been a contention point in the
> +   memory management subsystem. The mmap()/munmap() hold the write lock, and
> +   the madvise(MADV_DONTNEED or MADV_FREE) hold the read lock, so using
> +   madvise() instead of munmap() to released physical memory can reduce the
> +   competition of the mmap_lock.
> +
> +2. After using madvise() to release physical memory, there is no
> +   need to build vma and allocate page tables again when accessing the same
> +   virtual address again, which can also save some time.
>   
>   The following is the largest user PTE page table memory that can be
>   allocated by a single user process in a 32-bit and a 64-bit system.
> 
>> +The following is the largest user PTE page table memory that can be
>> +allocated by a single user process in a 32-bit and a 64-bit system.
>> +
> 
> We can say "assuming 4K page size" here,
> 
>> ++---------------------------+--------+---------+
>> +|                           | 32-bit | 64-bit  |
>> ++===========================+========+=========+
>> +| user PTE page table pages | 3 MiB  | 512 GiB |
>> ++---------------------------+--------+---------+
>> +| user PMD page table pages | 3 KiB  | 1 GiB   |
>> ++---------------------------+--------+---------+
>> +
>> +(for 32-bit, take 3G user address space, 4K page size as an example;
>> + for 64-bit, take 48-bit address width, 4K page size as an example.)
>> +
> 
> ... instead of here.

will do.

> 
>> +There is also a lock-less scenario(such as fast GUP). Fortunately, we don't need
>> +to do any additional operations to ensure that the system is in order. Take fast
>> +GUP as an example::
>> +
>> +	thread A		thread B
>> +	fast GUP		madvise(MADV_DONTNEED)
>> +	========		======================
>> +
>> +	get_user_pages_fast_only()
>> +	--> local_irq_save();
>> +				call_rcu(pte_free_rcu)
>> +	    gup_pgd_range();
>> +	    local_irq_restore();
>> +	    			/* do pte_free_rcu() */
>> +
> 
> I see whitespace warning circa do pte_free_rcu() line above when
> applying this series.

will fix.

Thanks,
Qi

> 
> Thanks.
>

Patch

diff --git a/Documentation/vm/index.rst b/Documentation/vm/index.rst
index 44365c4574a3..ee71baccc2e7 100644
--- a/Documentation/vm/index.rst
+++ b/Documentation/vm/index.rst
@@ -31,6 +31,7 @@  algorithms.  If you are looking for advice on simply allocating memory, see the
    page_frags
    page_owner
    page_table_check
+   pte_ref
    remap_file_pages
    slub
    split_page_table_lock
diff --git a/Documentation/vm/pte_ref.rst b/Documentation/vm/pte_ref.rst
new file mode 100644
index 000000000000..0ac1e5a408d7
--- /dev/null
+++ b/Documentation/vm/pte_ref.rst
@@ -0,0 +1,210 @@ 
+.. SPDX-License-Identifier: GPL-2.0
+
+============================================================================
+pte_ref: Tracking the number of references to each user PTE page table page
+============================================================================
+
+Preface
+=======
+
+Nowadays, in order to pursue high performance, applications mostly use
+high-performance user-mode memory allocators such as jemalloc or tcmalloc.
+These memory allocators use madvise(MADV_DONTNEED or MADV_FREE) to release
+physical memory for the following reasons::
+
+ First of all, we should hold the write lock of mmap_lock as little as
+ possible, since the mmap_lock semaphore has long been a contention point
+ in the memory management subsystem. mmap()/munmap() hold the write lock,
+ and madvise(MADV_DONTNEED or MADV_FREE) holds the read lock, so using
+ madvise() instead of munmap() to release physical memory reduces
+ contention on the mmap_lock.
+
+ Secondly, after using madvise() to release physical memory, there is no
+ need to rebuild the vma and allocate page tables when the same virtual
+ address is accessed again, which also saves some time.
+
+The following table shows the maximum user PTE page table memory that can be
+allocated by a single user process on a 32-bit and on a 64-bit system.
+
++---------------------------+--------+---------+
+|                           | 32-bit | 64-bit  |
++===========================+========+=========+
+| user PTE page table pages | 3 MiB  | 512 GiB |
++---------------------------+--------+---------+
+| user PMD page table pages | 3 KiB  | 1 GiB   |
++---------------------------+--------+---------+
+
+(For 32-bit, a 3 GiB user address space and 4K page size are assumed; for
+ 64-bit, a 48-bit address width and 4K page size are assumed. The arithmetic
+ behind the 64-bit numbers is sketched below.)
+
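+As a quick sanity check of the numbers above (a sketch of the arithmetic,
+assuming 4K pages and, for 64-bit, a fully used 48-bit address width): each
+PTE page table page holds 512 entries and so maps 512 * 4 KiB = 2 MiB of
+virtual address space, and each PMD page table page maps 512 * 2 MiB = 1 GiB::
+
+	2^48 bytes / 2 MiB mapped per PTE page = 2^27 PTE page table pages
+	2^27 pages * 4 KiB per page            = 512 GiB
+
+	2^48 bytes / 1 GiB mapped per PMD page = 2^18 PMD page table pages
+	2^18 pages * 4 KiB per page            = 1 GiB
+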
+After using madvise(), everything looks good, but as can be seen from the
+above table, a single process can create a large number of PTE page tables
+on a 64-bit system, since neither MADV_DONTNEED nor MADV_FREE releases page
+table memory. And until the process exits or calls munmap(), the kernel
+cannot reclaim these pages even if the PTE page tables no longer map
+anything.
+
+To fix this situation, we introduce a reference count for each user PTE page
+table page. Then we can track whether a user PTE page table page is still in
+use, and reclaim the user PTE page table pages that no longer map anything at
+the right time.
+
+Introduction
+============
+
+The ``pte_ref``, the reference count of a user PTE page table page, is of
+``percpu_ref`` type. It is used to track the usage of each user PTE page
+table page.
+
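+Since the ``pte_ref`` is a ``percpu_ref``, initializing and releasing it
+boils down to the generic percpu_ref API. The sketch below shows roughly what
+``pte_ref_init()`` (see the helpers table at the end of this document) might
+wrap; the ``struct pte_table_page`` wrapper and the release callback are
+hypothetical names for illustration, not the interface of this series::
+
+	/* Release callback, run once the pte_ref drops to zero. */
+	static void pte_ref_release(struct percpu_ref *ref)
+	{
+		struct pte_table_page *page =
+			container_of(ref, struct pte_table_page, pte_ref);
+
+		/* Defer the actual freeing to RCU, as the fast GUP
+		 * example below relies on. */
+		call_rcu(&page->rcu_head, pte_free_rcu);
+	}
+
+	int pte_ref_init(struct pte_table_page *page)
+	{
+		/* Start out in the cheap percpu mode. */
+		return percpu_ref_init(&page->pte_ref, pte_ref_release,
+				       0, GFP_KERNEL);
+	}
+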
+Who will hold the pte_ref?
+--------------------------
+
+The following will hold a pte_ref::
+
+ Any !pte_none() entry, such as a regular page table entry that maps a
+ physical page, a swap entry, a migration entry, etc.
+
+ Any visitor to the PTE page table entries, such as a page table walker.
+
+Any ``!pte_none()`` entry and any visitor can be regarded as a user of the
+PTE page table page. When the pte_ref drops to 0, it means that no one is
+using the PTE page table page, and this free PTE page table page can then
+be reclaimed.
+
+About mode switching
+--------------------
+
+When a user PTE page table page is allocated, its ``pte_ref`` is initialized
+in percpu mode, which incurs essentially no performance overhead. When we
+want to reclaim the PTE page, it is switched to atomic mode. Then we can
+check whether the ``pte_ref`` is zero (a sketch of this flow follows the
+list below)::
+
+ - If it is zero, we can safely reclaim the page immediately;
+ - If it is not zero but we expect that the PTE page can be reclaimed
+   automatically when no one is using it, we can keep its ``pte_ref`` in
+   atomic mode (e.g. the MADV_FREE case);
+ - If it is not zero and we will try again at the next opportunity, we can
+   choose to switch back to percpu mode (e.g. the MADV_DONTNEED case).
+
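+The reclaim flow described above can be sketched as follows; this is a
+minimal illustration built only on the generic percpu_ref API, and
+``free_pte_page()`` is a hypothetical stand-in for the actual freeing path::
+
+	static bool try_reclaim_pte_page(struct pte_table_page *page,
+					 bool keep_atomic)
+	{
+		/* Stop per-CPU counting so the count can be read reliably. */
+		percpu_ref_switch_to_atomic_sync(&page->pte_ref);
+
+		if (percpu_ref_is_zero(&page->pte_ref)) {
+			/* No entries and no visitors: free it right away. */
+			free_pte_page(page);
+			return true;
+		}
+
+		if (!keep_atomic)
+			/*
+			 * MADV_DONTNEED case: we will try again at the next
+			 * opportunity, so go back to the cheap percpu mode.
+			 */
+			percpu_ref_switch_to_percpu(&page->pte_ref);
+
+		/*
+		 * MADV_FREE case (keep_atomic): stay in atomic mode so the
+		 * final pte_put() can observe zero and free the page.
+		 */
+		return false;
+	}
+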
+Competitive relationship
+------------------------
+
+Currently, the user page table will only be released by calling
+``free_pgtables()`` when the process exits or ``unmap_region()`` is called
+(e.g. the ``munmap()`` path). So other threads only need to ensure mutual
+exclusion with these paths to guarantee that the page table is not released.
+For example::
+
+	thread A			thread B
+	page table walker		munmap
+	=================		======
+
+	mmap_read_lock()
+	if (!pte_none() && pte_present() && !pmd_trans_unstable()) {
+		pte_offset_map_lock()
+		*walk page table*
+		pte_unmap_unlock()
+	}
+	mmap_read_unlock()
+
+					mmap_write_lock_killable()
+					detach_vmas_to_be_unmapped()
+					unmap_region()
+					--> free_pgtables()
+
+But after we introduce the ``pte_ref`` for the user PTE page table page, this
+existing balance is broken: the page can be released at any time once its
+``pte_ref`` drops to 0. Therefore, the following case may happen::
+
+	thread A		thread B			thread C
+	page table walker	madvise(MADV_DONTNEED)		page fault
+	=================	======================		==========
+
+	mmap_read_lock()
+	if (!pte_none() && pte_present() && !pmd_trans_unstable()) {
+
+				mmap_read_lock()
+				unmap_page_range()
+				--> zap_pte_range()
+				    /* the pte_ref is reduced to 0 */
+				    --> free PTE page table page
+
+								mmap_read_lock()
+								/* may allocate
+								 * a new huge
+								 * pmd or a new
+								 * PTE page
+								 */
+
+		/* broken!! */
+		pte_offset_map_lock()
+
+As we can see, threads A, B and C all hold the read lock of mmap_lock, so
+they can execute concurrently. When thread B releases the PTE page table
+page, the value in the corresponding pmd entry becomes unstable: it may be
+none, a huge pmd, or map a new PTE page table page again. This will cause
+system chaos and even a panic.
+
+So, as described in the section "Who will hold the pte_ref?", the page table
+walker (visitor) also needs to take a ``pte_ref`` on the user PTE page table
+page before walking the page table (the helpers ``pte_tryget_map{_lock}()``
+can do this for us); then the system becomes orderly again (a code sketch
+follows the diagram below)::
+
+	thread A		thread B
+	page table walker	madvise(MADV_DONTNEED)
+	=================	======================
+
+	mmap_read_lock()
+	if (!pte_none() && pte_present() && !pmd_trans_unstable()) {
+		pte_tryget()
+		--> percpu_ref_tryget
+		*if successfully, then:*
+
+				mmap_read_lock()
+				unmap_page_range()
+				--> zap_pte_range()
+				    /* the pte_refcount is reduced to 1 */
+
+		pte_offset_map_lock()
+		*walk page table*
+		pte_unmap_unlock()
+
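+In code, a page table walker following this rule might look like the sketch
+below; the exact signatures of ``pte_tryget_map_lock()`` and ``pte_put()``
+are assumptions modelled on ``pte_offset_map_lock()``, not the definitive
+interface (see the helpers table at the end of this document)::
+
+	static void walk_one_pte(struct mm_struct *mm, pmd_t *pmd,
+				 unsigned long addr)
+	{
+		spinlock_t *ptl;
+		pte_t *pte;
+
+		/*
+		 * Try to take a pte_ref and map the PTE. A NULL return
+		 * means the PTE page table page has been freed, or is
+		 * being freed, so there is nothing to walk.
+		 */
+		pte = pte_tryget_map_lock(mm, pmd, addr, &ptl);
+		if (!pte)
+			return;
+
+		/* ... read or modify the PTE here ... */
+
+		pte_unmap_unlock(pte, ptl);
+		pte_put(mm, pmd, addr);	/* drop our reference */
+	}
+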
+There is also a lock-less scenario (such as fast GUP). Fortunately, we don't
+need to do any additional operations to ensure that the system stays in
+order. Take fast GUP as an example::
+
+	thread A		thread B
+	fast GUP		madvise(MADV_DONTNEED)
+	========		======================
+
+	get_user_pages_fast_only()
+	--> local_irq_save();
+				call_rcu(pte_free_rcu)
+	    gup_pgd_range();
+	    local_irq_restore();
+	    			/* do pte_free_rcu() */
+
+This works because fast GUP runs with interrupts disabled: the page is freed
+via ``call_rcu(pte_free_rcu)``, and the RCU grace period that must elapse
+before ``pte_free_rcu()`` runs cannot end while this CPU has interrupts
+disabled. The page therefore cannot be freed until after
+``local_irq_restore()``, when thread A has finished walking it.
+
+Helpers
+=======
+
++----------------------+------------------------------------------------+
+| pte_ref_init         | Initialize the pte_ref                         |
++----------------------+------------------------------------------------+
+| pte_ref_free         | Free the pte_ref                               |
++----------------------+------------------------------------------------+
+| pte_tryget           | Try to hold a pte_ref                          |
++----------------------+------------------------------------------------+
+| pte_put              | Decrement a pte_ref                            |
++----------------------+------------------------------------------------+
+| pte_tryget_map       | Do pte_tryget and pte_offset_map               |
++----------------------+------------------------------------------------+
+| pte_tryget_map_lock  | Do pte_tryget and pte_offset_map_lock          |
++----------------------+------------------------------------------------+
+| free_user_pte        | Free the user PTE page table page              |
++----------------------+------------------------------------------------+
+| try_to_free_user_pte | Try to free the user PTE page table page       |
++----------------------+------------------------------------------------+
+| track_pte_set        | Track setting of a user PTE page table entry   |
++----------------------+------------------------------------------------+
+| track_pte_clear      | Track clearing of a user PTE page table entry  |
++----------------------+------------------------------------------------+
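+
+To make the intended use of the tracking helpers concrete, the sketch below
+shows roughly where they would sit relative to the existing page table
+primitives; the track_pte_set()/track_pte_clear() signatures are assumptions
+for illustration only::
+
+	/* Installing an entry makes it a user of the PTE page. */
+	set_pte_at(mm, addr, ptep, entry);
+	track_pte_set(mm, addr, ptep, entry);
+
+	/*
+	 * Clearing an entry drops that user; dropping the last
+	 * reference may allow the PTE page table page to be freed.
+	 */
+	old_entry = ptep_get_and_clear(mm, addr, ptep);
+	track_pte_clear(mm, addr, ptep, old_entry);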
+