
[v3,2/2] mm: use aligned address in copy_user_gigantic_page()

Message ID 20241028145656.932941-2-wangkefeng.wang@huawei.com (mailing list archive)
State New
Series [v3,1/2] mm: use aligned address in clear_gigantic_page()

Commit Message

Kefeng Wang Oct. 28, 2024, 2:56 p.m. UTC
In the current kernel, hugetlb_wp() calls copy_user_large_folio() with the
fault address, which may not be aligned to the huge page size. The unaligned
address may then be passed on to copy_user_gigantic_page(), which requires a
huge-page-size-aligned address, so this may cause memory corruption or an
information leak. In addition, use the more descriptive name 'addr_hint'
instead of 'addr' in copy_user_gigantic_page().
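
A minimal userspace sketch (not the kernel code itself) of the address
rounding the patch moves into copy_user_gigantic_page(), i.e. the effect of
ALIGN_DOWN(addr_hint, folio_size(dst)); the names fault_addr and
huge_page_size below are illustrative only:

	#include <stdio.h>

	/* Round x down to a multiple of a; a must be a power of two. */
	#define ALIGN_DOWN(x, a)	((x) & ~((unsigned long)(a) - 1))

	int main(void)
	{
		unsigned long huge_page_size = 1UL << 30;            /* e.g. a 1 GiB gigantic page */
		unsigned long fault_addr = (1UL << 30) + 0x1234;      /* unaligned fault address */
		unsigned long base = ALIGN_DOWN(fault_addr, huge_page_size);

		/* The per-page copy loop must start from base, not fault_addr. */
		printf("fault %#lx -> base %#lx\n", fault_addr, base);
		return 0;
	}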

Fixes: 530dd9926dc1 ("mm: memory: improve copy_user_large_folio()")
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
v3:
- revise patch description, suggested by Huang Ying
- use addr_hint for copy_user_gigantic_page(), suggested by David
v2: 
- update changelog to clarify the impact, per Andrew

 mm/hugetlb.c | 5 ++---
 mm/memory.c  | 5 +++--
 2 files changed, 5 insertions(+), 5 deletions(-)

Comments

David Hildenbrand Oct. 29, 2024, 9:51 a.m. UTC | #1
On 28.10.24 15:56, Kefeng Wang wrote:
> In the current kernel, hugetlb_wp() calls copy_user_large_folio() with the
> fault address, which may not be aligned to the huge page size. The unaligned
> address may then be passed on to copy_user_gigantic_page(), which requires a
> huge-page-size-aligned address, so this may cause memory corruption or an
> information leak. In addition, use the more descriptive name 'addr_hint'
> instead of 'addr' in copy_user_gigantic_page().
> 
> Fixes: 530dd9926dc1 ("mm: memory: improve copy_user_large_folio()")
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---

Reviewed-by: David Hildenbrand <david@redhat.com>

Patch

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 2c8c5da0f5d3..15b5d46d49d2 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5338,7 +5338,7 @@  int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 					break;
 				}
 				ret = copy_user_large_folio(new_folio, pte_folio,
-						ALIGN_DOWN(addr, sz), dst_vma);
+							    addr, dst_vma);
 				folio_put(pte_folio);
 				if (ret) {
 					folio_put(new_folio);
@@ -6641,8 +6641,7 @@  int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 			*foliop = NULL;
 			goto out;
 		}
-		ret = copy_user_large_folio(folio, *foliop,
-					    ALIGN_DOWN(dst_addr, size), dst_vma);
+		ret = copy_user_large_folio(folio, *foliop, dst_addr, dst_vma);
 		folio_put(*foliop);
 		*foliop = NULL;
 		if (ret) {
diff --git a/mm/memory.c b/mm/memory.c
index 84864387f965..209885a4134f 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6852,13 +6852,14 @@  void folio_zero_user(struct folio *folio, unsigned long addr_hint)
 }
 
 static int copy_user_gigantic_page(struct folio *dst, struct folio *src,
-				   unsigned long addr,
+				   unsigned long addr_hint,
 				   struct vm_area_struct *vma,
 				   unsigned int nr_pages)
 {
-	int i;
+	unsigned long addr = ALIGN_DOWN(addr_hint, folio_size(dst));
 	struct page *dst_page;
 	struct page *src_page;
+	int i;
 
 	for (i = 0; i < nr_pages; i++) {
 		dst_page = folio_page(dst, i);