[v3,02/16] mm/huge_memory: access vm_page_prot with READ_ONCE in remove_migration_pmd

Message ID 20220704132201.14611-3-linmiaohe@huawei.com (mailing list archive)
State New
Series A few cleanup patches for huge_memory

Commit Message

Miaohe Lin July 4, 2022, 1:21 p.m. UTC
vma->vm_page_prot is read locklessly from the rmap_walk and may be updated
concurrently. Use READ_ONCE() to avoid the risk of reading a torn or
intermediate value.
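
For illustration only, not part of the patch: a minimal userspace sketch of
the failure mode READ_ONCE() guards against. The READ_ONCE()/WRITE_ONCE()
macros below approximate the kernel's with volatile casts, and the variable
and thread names are hypothetical stand-ins for vma->vm_page_prot, a
concurrent updater, and the lockless rmap_walk reader.

	#include <pthread.h>
	#include <stdio.h>

	/* Approximations of the kernel's READ_ONCE()/WRITE_ONCE(): the
	 * volatile cast forces exactly one access and stops the compiler
	 * from refetching or splitting the load/store. */
	#define READ_ONCE(x)		(*(const volatile typeof(x) *)&(x))
	#define WRITE_ONCE(x, val)	(*(volatile typeof(x) *)&(x) = (val))

	/* Hypothetical stand-in for vma->vm_page_prot. */
	static unsigned long page_prot;

	/* Stand-in for a concurrent updater of vm_page_prot. */
	static void *updater(void *arg)
	{
		(void)arg;
		for (unsigned long i = 0; i < 1000000; i++)
			WRITE_ONCE(page_prot, i);
		return NULL;
	}

	/* Stand-in for the lockless reader (remove_migration_pmd via
	 * rmap_walk). Each iteration does a single load; a plain read
	 * here would let the compiler reload or tear the access. */
	static void *reader(void *arg)
	{
		unsigned long seen = 0;

		(void)arg;
		for (int i = 0; i < 1000000; i++)
			seen = READ_ONCE(page_prot);
		printf("last value seen: %lu\n", seen);
		return NULL;
	}

	int main(void)
	{
		pthread_t u, r;

		pthread_create(&u, NULL, updater, NULL);
		pthread_create(&r, NULL, reader, NULL);
		pthread_join(u, NULL);
		pthread_join(r, NULL);
		return 0;
	}

Build with e.g. gcc -O2 -pthread. The same reasoning applies in
remove_migration_pmd(): the new pmd is built from a single snapshot of
vm_page_prot rather than whatever the compiler happens to reload
mid-expression.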

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/huge_memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Patch

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index f4e581eefb67..a010f9ba15ce 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3309,7 +3309,7 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
 
 	entry = pmd_to_swp_entry(*pvmw->pmd);
 	get_page(new);
-	pmde = pmd_mkold(mk_huge_pmd(new, vma->vm_page_prot));
+	pmde = pmd_mkold(mk_huge_pmd(new, READ_ONCE(vma->vm_page_prot)));
 	if (pmd_swp_soft_dirty(*pvmw->pmd))
 		pmde = pmd_mksoft_dirty(pmde);
 	if (is_writable_migration_entry(entry))