
[v2,2/2] mm: use aligned address in copy_user_gigantic_page()

Message ID 20241026054307.3896926-2-wangkefeng.wang@huawei.com (mailing list archive)
State New
Series [v2,1/2] mm: use aligned address in clear_gigantic_page()

Commit Message

Kefeng Wang Oct. 26, 2024, 5:43 a.m. UTC
When copying a gigantic page, the subpages are copied from the first page
to the last page. If the addr_hint that is passed in is not the address of
the first page of the folio, some architectures could flush the wrong
cache when they use that address as a hint. For non-gigantic pages, the
base address is calculated internally, so even a wrong addr_hint only has
a performance impact (process_huge_page() wants to process the target
page last to keep its cache lines hot); there is no functional impact.

Let's pass the real accessed address to copy_user_large_folio() and use
the aligned address in copy_user_gigantic_page() to fix it.

Fixes: 530dd9926dc1 ("mm: memory: improve copy_user_large_folio()")
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
v2: 
- update changelog to clarify the impact, per Andrew

 mm/hugetlb.c | 5 ++---
 mm/memory.c  | 1 +
 2 files changed, 3 insertions(+), 3 deletions(-)
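To illustrate the addressing issue described in the commit message, here is a
minimal standalone sketch: the faulting address (addr_hint) may point anywhere
inside the gigantic folio, while per-subpage copying and cache maintenance need
addresses derived from the folio's aligned base. The names (folio_base,
PAGE_SIZE value, example addresses) are made up for the example; this is not
the mm/memory.c implementation.

/* Standalone illustration only -- not kernel code. */
#include <stdio.h>

#define PAGE_SIZE 4096UL

/* Round an address inside the folio down to its first page
 * (same effect as the kernel's ALIGN_DOWN()). */
static unsigned long folio_base(unsigned long addr_hint, unsigned long folio_size)
{
	return addr_hint & ~(folio_size - 1);
}

int main(void)
{
	unsigned long folio_size = 1UL << 30;			/* 1 GiB gigantic folio */
	unsigned long addr_hint = 0x40000000UL + 0x12345000UL;	/* fault inside the folio */
	unsigned long base = folio_base(addr_hint, folio_size);

	/* Each subpage copy must use its own address, derived from the
	 * aligned base, rather than addr_hint itself. */
	for (unsigned long i = 0; i < 4; i++)	/* first few subpages only */
		printf("subpage %lu -> %#lx\n", i, base + i * PAGE_SIZE);

	return 0;
}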

Comments

David Hildenbrand Oct. 28, 2024, 10:01 a.m. UTC | #1
On 26.10.24 07:43, Kefeng Wang wrote:
> When copying a gigantic page, the subpages are copied from the first page
> to the last page. If the addr_hint that is passed in is not the address of
> the first page of the folio, some architectures could flush the wrong
> cache when they use that address as a hint. For non-gigantic pages, the
> base address is calculated internally, so even a wrong addr_hint only has
> a performance impact (process_huge_page() wants to process the target
> page last to keep its cache lines hot); there is no functional impact.
> 
> Let's pass the real accessed address to copy_user_large_folio() and use
> the aligned address in copy_user_gigantic_page() to fix it.
> 
> Fixes: 530dd9926dc1 ("mm: memory: improve copy_user_large_folio()")
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
> v2:
> - update changelog to clarify the impact, per Andrew
> 
>   mm/hugetlb.c | 5 ++---
>   mm/memory.c  | 1 +
>   2 files changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 2c8c5da0f5d3..15b5d46d49d2 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -5338,7 +5338,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
>   					break;
>   				}
>   				ret = copy_user_large_folio(new_folio, pte_folio,
> -						ALIGN_DOWN(addr, sz), dst_vma);
> +							    addr, dst_vma);
>   				folio_put(pte_folio);
>   				if (ret) {
>   					folio_put(new_folio);
> @@ -6641,8 +6641,7 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
>   			*foliop = NULL;
>   			goto out;
>   		}
> -		ret = copy_user_large_folio(folio, *foliop,
> -					    ALIGN_DOWN(dst_addr, size), dst_vma);
> +		ret = copy_user_large_folio(folio, *foliop, dst_addr, dst_vma);
>   		folio_put(*foliop);
>   		*foliop = NULL;
>   		if (ret) {
> diff --git a/mm/memory.c b/mm/memory.c
> index ef47b7ea5ddd..e5284bab659d 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -6860,6 +6860,7 @@ static int copy_user_gigantic_page(struct folio *dst, struct folio *src,
>   	struct page *dst_page;
>   	struct page *src_page;
>   
> +	addr = ALIGN_DOWN(addr, folio_size(dst));

Same thing, please make it clearer that there is an "addr_hint" and an 
"addr".

Patch

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 2c8c5da0f5d3..15b5d46d49d2 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5338,7 +5338,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 					break;
 				}
 				ret = copy_user_large_folio(new_folio, pte_folio,
-						ALIGN_DOWN(addr, sz), dst_vma);
+							    addr, dst_vma);
 				folio_put(pte_folio);
 				if (ret) {
 					folio_put(new_folio);
@@ -6641,8 +6641,7 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 			*foliop = NULL;
 			goto out;
 		}
-		ret = copy_user_large_folio(folio, *foliop,
-					    ALIGN_DOWN(dst_addr, size), dst_vma);
+		ret = copy_user_large_folio(folio, *foliop, dst_addr, dst_vma);
 		folio_put(*foliop);
 		*foliop = NULL;
 		if (ret) {
diff --git a/mm/memory.c b/mm/memory.c
index ef47b7ea5ddd..e5284bab659d 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6860,6 +6860,7 @@ static int copy_user_gigantic_page(struct folio *dst, struct folio *src,
 	struct page *dst_page;
 	struct page *src_page;
 
+	addr = ALIGN_DOWN(addr, folio_size(dst));
 	for (i = 0; i < nr_pages; i++) {
 		dst_page = folio_page(dst, i);
 		src_page = folio_page(src, i);