
[v3,09/16] mm/rmap: use page_move_anon_rmap() when reusing a mapped PageAnon() page exclusively

Message ID 20220329160440.193848-10-david@redhat.com (mailing list archive)
State New
Series mm: COW fixes part 2: reliable GUP pins of anonymous pages

Commit Message

David Hildenbrand March 29, 2022, 4:04 p.m. UTC
We want to mark anonymous pages exclusive, and when using
page_move_anon_rmap() we know that we are the exclusive user, as
properly documented. This is a preparation for marking anonymous pages
exclusive in page_move_anon_rmap().

In both instances (do_huge_pmd_wp_page() and do_wp_page()), we're holding
the page lock and are sure that we're the exclusive owner
(page_count() == 1). hugetlb already properly uses
page_move_anon_rmap() in the write fault handler.

Note that in case of a PTE-mapped THP, we'll only end up calling this
function if the whole THP is only referenced by the single PTE mapping
a single subpage (page_count() == 1); consequently, it's fine to modify
the compound page mapping inside page_move_anon_rmap().

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/huge_memory.c | 2 ++
 mm/memory.c      | 1 +
 2 files changed, 3 insertions(+)
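
For context, page_move_anon_rmap() in mm/rmap.c looks roughly as follows at
this point in the series (a simplified sketch, not the verbatim source): the
page has to be locked and exclusively mapped, and the function rewrites
page->mapping to point at the faulting VMA's anon_vma:

void page_move_anon_rmap(struct page *page, struct vm_area_struct *vma)
{
	struct anon_vma *anon_vma = vma->anon_vma;

	/* For a THP, page->mapping only exists on the head page. */
	page = compound_head(page);

	VM_BUG_ON_PAGE(!PageLocked(page), page);
	VM_BUG_ON_VMA(!anon_vma, vma);

	anon_vma = (void *) anon_vma + PAGE_MAPPING_ANON;
	/*
	 * Ensure that anon_vma and the PAGE_MAPPING_ANON bit are written
	 * simultaneously, so a concurrent reader (eg page_referenced()'s
	 * PageAnon()) will not see one without the other.
	 */
	WRITE_ONCE(page->mapping, (struct address_space *) anon_vma);
}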

Comments

Vlastimil Babka April 12, 2022, 9:26 a.m. UTC | #1
On 3/29/22 18:04, David Hildenbrand wrote:
> We want to mark anonymous pages exclusive, and when using
> page_move_anon_rmap() we know that we are the exclusive user, as
> properly documented. This is a preparation for marking anonymous pages
> exclusive in page_move_anon_rmap().
> 
> In both instances (do_huge_pmd_wp_page() and do_wp_page()), we're holding
> the page lock and are sure that we're the exclusive owner
> (page_count() == 1). hugetlb already properly uses
> page_move_anon_rmap() in the write fault handler.

Yeah, note that do_wp_page() used to always call page_move_anon_rmap(), ever
since the latter was introduced, until commit 09854ba94c6a ("mm: do_wp_page()
simplification") dropped it. That was probably not intended.

> Note that in case of a PTE-mapped THP, we'll only end up calling this
> function if the whole THP is only referenced by the single PTE mapping
> a single subpage (page_count() == 1); consequently, it's fine to modify
> the compound page mapping inside page_move_anon_rmap().
> 
> Signed-off-by: David Hildenbrand <david@redhat.com>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

> ---
>  mm/huge_memory.c | 2 ++
>  mm/memory.c      | 1 +
>  2 files changed, 3 insertions(+)
> 
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index c4526343565a..dd16819c5edc 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1317,6 +1317,8 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
>  		try_to_free_swap(page);
>  	if (page_count(page) == 1) {
>  		pmd_t entry;
> +
> +		page_move_anon_rmap(page, vma);
>  		entry = pmd_mkyoung(orig_pmd);
>  		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
>  		if (pmdp_set_access_flags(vma, haddr, vmf->pmd, entry, 1))
> diff --git a/mm/memory.c b/mm/memory.c
> index 03e29c9614e0..4303c0fdcf17 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3303,6 +3303,7 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
>  		 * and the page is locked, it's dark out, and we're wearing
>  		 * sunglasses. Hit it.
>  		 */
> +		page_move_anon_rmap(page, vma);
>  		unlock_page(page);
>  		wp_page_reuse(vmf);
>  		return VM_FAULT_WRITE;
David Hildenbrand April 12, 2022, 9:28 a.m. UTC | #2
On 12.04.22 11:26, Vlastimil Babka wrote:
> On 3/29/22 18:04, David Hildenbrand wrote:
>> We want to mark anonymous pages exclusive, and when using
>> page_move_anon_rmap() we know that we are the exclusive user, as
>> properly documented. This is a preparation for marking anonymous pages
>> exclusive in page_move_anon_rmap().
>>
>> In both instances (do_huge_pmd_wp_page() and do_wp_page()), we're holding
>> the page lock and are sure that we're the exclusive owner
>> (page_count() == 1). hugetlb already properly uses
>> page_move_anon_rmap() in the write fault handler.
> 
> Yeah, note that do_wp_page() used to always call page_move_anon_rmap(), ever
> since the latter was introduced, until commit 09854ba94c6a ("mm: do_wp_page()
> simplification") dropped it. That was probably not intended.

Yeah, it was buried underneath all that reuse_swap_page() complexity.
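
For reference, the anonymous-page reuse path in do_wp_page() before commit
09854ba94c6a looked roughly like this (a simplified sketch from the old
mm/memory.c, not the exact code):

	if (reuse_swap_page(vmf->page, &total_map_swapcount)) {
		if (total_map_swapcount == 1) {
			/*
			 * The page is all ours. Move it to
			 * our anon_vma so the rmap code will
			 * not search our parent or siblings.
			 * Protected against the rmap code by
			 * the page lock.
			 */
			page_move_anon_rmap(vmf->page, vma);
		}
		unlock_page(vmf->page);
		wp_page_reuse(vmf);
		return VM_FAULT_WRITE;
	}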

Patch

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index c4526343565a..dd16819c5edc 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1317,6 +1317,8 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
 		try_to_free_swap(page);
 	if (page_count(page) == 1) {
 		pmd_t entry;
+
+		page_move_anon_rmap(page, vma);
 		entry = pmd_mkyoung(orig_pmd);
 		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
 		if (pmdp_set_access_flags(vma, haddr, vmf->pmd, entry, 1))
diff --git a/mm/memory.c b/mm/memory.c
index 03e29c9614e0..4303c0fdcf17 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3303,6 +3303,7 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
 		 * and the page is locked, it's dark out, and we're wearing
 		 * sunglasses. Hit it.
 		 */
+		page_move_anon_rmap(page, vma);
 		unlock_page(page);
 		wp_page_reuse(vmf);
 		return VM_FAULT_WRITE;