Message ID | 20220909021653.3371879-1-liushixin2@huawei.com (mailing list archive)
---|---
State | New
Series | [v3] mm/huge_memory: prevent THP_ZERO_PAGE_ALLOC increased twice
On Fri, 9 Sep 2022 10:16:53 +0800 Liu Shixin <liushixin2@huawei.com> wrote:

> Since user who read THP_ZERO_PAGE_ALLOC may be more concerned about the
> huge zero pages that are really allocated using for thp and can indicated
> the times of calling huge_zero_page_shrinker. It is misleading to increase
> twice if two threads call get_huge_zero_page concurrently. Don't increase
> the value if the huge page is not really used.

I can't say I really understand the point about huge_zero_page_shrinker(), so I propose this changelog:

: A user who reads THP_ZERO_PAGE_ALLOC may be more concerned about the huge
: zero pages that are really allocated for thp.  It is misleading to
: increase THP_ZERO_PAGE_ALLOC twice if two threads call get_huge_zero_page
: concurrently.  Don't increase the value if the huge page is not really
: used.

The patch makes sense to me. What do others think?
```diff
diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
index c9c37f16eef8..8e3418ec4503 100644
--- a/Documentation/admin-guide/mm/transhuge.rst
+++ b/Documentation/admin-guide/mm/transhuge.rst
@@ -366,10 +366,9 @@ thp_split_pmd
 	page table entry.
 
 thp_zero_page_alloc
-	is incremented every time a huge zero page is
-	successfully allocated. It includes allocations which where
-	dropped due race with other allocation. Note, it doesn't count
-	every map of the huge zero page, only its allocation.
+	is incremented every time a huge zero page used for thp is
+	successfully allocated. Note, it doesn't count every map of
+	the huge zero page, only its allocation.
 
 thp_zero_page_alloc_failed
 	is incremented if kernel fails to allocate
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 88d98241a635..5c83a424803a 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -163,7 +163,6 @@ static bool get_huge_zero_page(void)
 		count_vm_event(THP_ZERO_PAGE_ALLOC_FAILED);
 		return false;
 	}
-	count_vm_event(THP_ZERO_PAGE_ALLOC);
 	preempt_disable();
 	if (cmpxchg(&huge_zero_page, NULL, zero_page)) {
 		preempt_enable();
@@ -175,6 +174,7 @@ static bool get_huge_zero_page(void)
 	/* We take additional reference here. It will be put back by shrinker */
 	atomic_set(&huge_zero_refcount, 2);
 	preempt_enable();
+	count_vm_event(THP_ZERO_PAGE_ALLOC);
 	return true;
 }
 
```
A user who reads THP_ZERO_PAGE_ALLOC may be more concerned about the huge zero pages that are really allocated for thp, and the counter can then indicate how many times huge_zero_page_shrinker has been invoked. It is misleading to increase the counter twice if two threads call get_huge_zero_page concurrently. Don't increase the value if the huge page is not really used.

Update Documentation/admin-guide/mm/transhuge.rst together.

Signed-off-by: Liu Shixin <liushixin2@huawei.com>
---
v2->v3: Update the commit message.
v1->v2: Update document.

 Documentation/admin-guide/mm/transhuge.rst | 7 +++----
 mm/huge_memory.c                           | 2 +-
 2 files changed, 4 insertions(+), 5 deletions(-)