| Message ID | 20241028145656.932941-1-wangkefeng.wang@huawei.com (mailing list archive) |
|---|---|
| State | New |
| Series | [v3,1/2] mm: use aligned address in clear_gigantic_page() |
Kefeng Wang <wangkefeng.wang@huawei.com> writes:

> In current kernel, hugetlb_no_page() calls folio_zero_user() with the
> fault address. Where the fault address may be not aligned with the huge
> page size. Then, folio_zero_user() may call clear_gigantic_page() with
> the address, while clear_gigantic_page() requires the address to be huge
> page size aligned. So, this may cause memory corruption or information
> leak, addtional, use more obvious naming 'addr_hint' instead of 'addr'
> for clear_gigantic_page().
>
> Fixes: 78fefd04c123 ("mm: memory: convert clear_huge_page() to folio_zero_user()")
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
> v3:
> - revise patch description, suggested by Huang Ying
> - use addr_hint for clear_gigantic_page(), suggested by David
> v2:
> - update changelog to clarify the impact, per Andrew
>
>  fs/hugetlbfs/inode.c | 2 +-
>  mm/memory.c          | 3 ++-
>  2 files changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
> index a4441fb77f7c..a5ea006f403e 100644
> --- a/fs/hugetlbfs/inode.c
> +++ b/fs/hugetlbfs/inode.c
> @@ -825,7 +825,7 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
>  			error = PTR_ERR(folio);
>  			goto out;
>  		}
> -		folio_zero_user(folio, ALIGN_DOWN(addr, hpage_size));
> +		folio_zero_user(folio, addr);
>  		__folio_mark_uptodate(folio);
>  		error = hugetlb_add_to_page_cache(folio, mapping, index);
>  		if (unlikely(error)) {
> diff --git a/mm/memory.c b/mm/memory.c
> index 75c2dfd04f72..84864387f965 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -6815,9 +6815,10 @@ static inline int process_huge_page(
>  	return 0;
>  }
>
> -static void clear_gigantic_page(struct folio *folio, unsigned long addr,
> +static void clear_gigantic_page(struct folio *folio, unsigned long addr_hint,
>  				unsigned int nr_pages)
>  {
> +	unsigned long addr = ALIGN_DOWN(addr_hint, folio_size(folio));
>  	int i;
>
>  	might_sleep();

LGTM. Thanks for fixing this. Feel free to add

Reviewed-by: "Huang, Ying" <ying.huang@intel.com>

in the future version.

--
Best Regards,
Huang, Ying
On 28.10.24 15:56, Kefeng Wang wrote:
> In current kernel, hugetlb_no_page() calls folio_zero_user() with the
> fault address. Where the fault address may be not aligned with the huge
> page size. Then, folio_zero_user() may call clear_gigantic_page() with
> the address, while clear_gigantic_page() requires the address to be huge
> page size aligned. So, this may cause memory corruption or information
> leak, addtional, use more obvious naming 'addr_hint' instead of 'addr'
> for clear_gigantic_page().
>
> Fixes: 78fefd04c123 ("mm: memory: convert clear_huge_page() to folio_zero_user()")
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
> v3:
> - revise patch description, suggested by Huang Ying
> - use addr_hint for clear_gigantic_page(), suggested by David
> v2:
> - update changelog to clarify the impact, per Andrew
>
>  fs/hugetlbfs/inode.c | 2 +-
>  mm/memory.c          | 3 ++-
>  2 files changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
> index a4441fb77f7c..a5ea006f403e 100644
> --- a/fs/hugetlbfs/inode.c
> +++ b/fs/hugetlbfs/inode.c
> @@ -825,7 +825,7 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
>  			error = PTR_ERR(folio);
>  			goto out;
>  		}
> -		folio_zero_user(folio, ALIGN_DOWN(addr, hpage_size));
> +		folio_zero_user(folio, addr);
>  		__folio_mark_uptodate(folio);
>  		error = hugetlb_add_to_page_cache(folio, mapping, index);
>  		if (unlikely(error)) {
> diff --git a/mm/memory.c b/mm/memory.c
> index 75c2dfd04f72..84864387f965 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -6815,9 +6815,10 @@ static inline int process_huge_page(
>  	return 0;
>  }
>
> -static void clear_gigantic_page(struct folio *folio, unsigned long addr,
> +static void clear_gigantic_page(struct folio *folio, unsigned long addr_hint,
>  				unsigned int nr_pages)
>  {
> +	unsigned long addr = ALIGN_DOWN(addr_hint, folio_size(folio));
>  	int i;
>
>  	might_sleep();

Reviewed-by: David Hildenbrand <david@redhat.com>
diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index a4441fb77f7c..a5ea006f403e 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -825,7 +825,7 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
 			error = PTR_ERR(folio);
 			goto out;
 		}
-		folio_zero_user(folio, ALIGN_DOWN(addr, hpage_size));
+		folio_zero_user(folio, addr);
 		__folio_mark_uptodate(folio);
 		error = hugetlb_add_to_page_cache(folio, mapping, index);
 		if (unlikely(error)) {
diff --git a/mm/memory.c b/mm/memory.c
index 75c2dfd04f72..84864387f965 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6815,9 +6815,10 @@ static inline int process_huge_page(
 	return 0;
 }
 
-static void clear_gigantic_page(struct folio *folio, unsigned long addr,
+static void clear_gigantic_page(struct folio *folio, unsigned long addr_hint,
 				unsigned int nr_pages)
 {
+	unsigned long addr = ALIGN_DOWN(addr_hint, folio_size(folio));
 	int i;
 
 	might_sleep();
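For context on the new line in the mm/memory.c hunk: the kernel's ALIGN_DOWN() rounds an address down to a power-of-two boundary, so applying it to the raw fault hint with the folio size recovers the folio's base address. The short userspace stand-in below is only illustrative, not kernel code; the folio size, base address, and fault offset are made-up example values.

#include <stdio.h>

/* Userspace stand-in for the kernel's ALIGN_DOWN(): round x down to the
 * previous multiple of the power-of-two alignment a. */
#define ALIGN_DOWN(x, a) ((x) & ~((unsigned long)(a) - 1))

int main(void)
{
	/* Illustrative values: a 1 GiB gigantic folio mapped at 0x40000000,
	 * with the fault landing 300 MiB into it. */
	const unsigned long folio_size = 1UL << 30;
	const unsigned long folio_base = 0x40000000UL;
	const unsigned long addr_hint  = folio_base + (300UL << 20);

	unsigned long addr = ALIGN_DOWN(addr_hint, folio_size);

	printf("addr_hint = %#lx\n", addr_hint);
	printf("addr      = %#lx (folio base recovered: %s)\n",
	       addr, addr == folio_base ? "yes" : "no");
	return 0;
}

With this computation moved into clear_gigantic_page() itself, callers such as hugetlbfs_fallocate() can simply pass the address they have, which is why the first hunk drops the caller-side ALIGN_DOWN().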
In the current kernel, hugetlb_no_page() calls folio_zero_user() with the
fault address, which may not be aligned with the huge page size.
folio_zero_user() may then call clear_gigantic_page() with that address,
while clear_gigantic_page() requires the address to be huge-page-size
aligned. This may cause memory corruption or an information leak.
Additionally, use the more obvious name 'addr_hint' instead of 'addr' for
clear_gigantic_page().

Fixes: 78fefd04c123 ("mm: memory: convert clear_huge_page() to folio_zero_user()")
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
v3:
- revise patch description, suggested by Huang Ying
- use addr_hint for clear_gigantic_page(), suggested by David
v2:
- update changelog to clarify the impact, per Andrew

 fs/hugetlbfs/inode.c | 2 +-
 mm/memory.c          | 3 ++-
 2 files changed, 3 insertions(+), 2 deletions(-)
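To make the changelog's warning a little more concrete, here is a small userspace model with assumed, illustrative values (again not kernel code). It assumes the gigantic-page clearing loop pairs folio page i with user address addr + i * PAGE_SIZE, which is not visible in the quoted hunk; with an unaligned hint those per-page addresses drift away from the pages actually being cleared and eventually point past the huge page entirely. The exact consequences are architecture-dependent and are not spelled out in the changelog.

#include <stdio.h>

#define PAGE_SIZE 4096UL
/* Userspace stand-in for the kernel's ALIGN_DOWN() on power-of-two sizes. */
#define ALIGN_DOWN(x, a) ((x) & ~((unsigned long)(a) - 1))

int main(void)
{
	/* Assumed illustrative layout: 1 GiB huge page at 0x40000000,
	 * fault hint 300 MiB + 8 KiB into it. */
	const unsigned long folio_size = 1UL << 30;
	const unsigned long folio_base = 0x40000000UL;
	const unsigned long addr_hint  = folio_base + (300UL << 20) + 2 * PAGE_SIZE;
	const unsigned long aligned    = ALIGN_DOWN(addr_hint, folio_size);
	unsigned long i;

	/* Compare the user address paired with the first few folio pages when
	 * the loop starts from the aligned base vs. the raw fault hint. */
	for (i = 0; i < 3; i++) {
		unsigned long good = aligned   + i * PAGE_SIZE;
		unsigned long bad  = addr_hint + i * PAGE_SIZE;

		printf("page %lu: aligned->%#lx  unaligned->%#lx  (off by %#lx)\n",
		       i, good, bad, bad - good);
	}

	/* With the raw hint, the address paired with the folio's last page
	 * even lands past the end of the huge page mapping. */
	unsigned long last = folio_size / PAGE_SIZE - 1;
	printf("last page: unaligned->%#lx, huge page ends at %#lx\n",
	       addr_hint + last * PAGE_SIZE, folio_base + folio_size);
	return 0;
}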