Message ID | 20200901014636.29737-6-richard.weiyang@linux.alibaba.com
---|---
State | New, archived
Series | mm/hugetlb: code refine and simplification
On 9/1/20 3:46 AM, Wei Yang wrote:
> The page allocated from buddy is not on any list, so just use list_add()
> is enough.
>
> Signed-off-by: Wei Yang <richard.weiyang@linux.alibaba.com>
> Reviewed-by: Baoquan He <bhe@redhat.com>
> Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
> ---
>  mm/hugetlb.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 441b7f7c623e..c9b292e664c4 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -2405,7 +2405,7 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
>  		h->resv_huge_pages--;
>  	}
>  	spin_lock(&hugetlb_lock);
> -	list_move(&page->lru, &h->hugepage_activelist);
> +	list_add(&page->lru, &h->hugepage_activelist);

Hmm, how does that list_move() actually not crash today?
Page has been taken from free lists, thus there was list_del() and page->lru
should be poisoned.
list_move() does __list_del_entry() which will either detect the poison with
CONFIG_DEBUG_LIST, or crash accessing the poison, no?
Am I missing something or does it mean this code is actually never executed in wild?

>  		/* Fall through */

Maybe delete this comment? This is not a switch statement.

>  	}
>  	hugetlb_cgroup_commit_charge(idx, pages_per_huge_page(h), h_cg, page);
On 9/2/20 3:49 AM, Vlastimil Babka wrote:
> On 9/1/20 3:46 AM, Wei Yang wrote:
>> The page allocated from buddy is not on any list, so just use list_add()
>> is enough.
>>
>> Signed-off-by: Wei Yang <richard.weiyang@linux.alibaba.com>
>> Reviewed-by: Baoquan He <bhe@redhat.com>
>> Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
>> ---
>>  mm/hugetlb.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index 441b7f7c623e..c9b292e664c4 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -2405,7 +2405,7 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
>>  		h->resv_huge_pages--;
>>  	}
>>  	spin_lock(&hugetlb_lock);
>> -	list_move(&page->lru, &h->hugepage_activelist);
>> +	list_add(&page->lru, &h->hugepage_activelist);
>
> Hmm, how does that list_move() actually not crash today?
> Page has been taken from free lists, thus there was list_del() and page->lru
> should be poisoned.
> list_move() does __list_del_entry() which will either detect the poison with
> CONFIG_DEBUG_LIST, or crash accessing the poison, no?
> Am I missing something or does it mean this code is actually never executed in wild?

There is not enough context in the diff, but the hugetlb page was not taken
from the free list. Rather, it was just created by a call to
alloc_buddy_huge_page_with_mpol(). As part of the allocation/creation
prep_new_huge_page will be called which will INIT_LIST_HEAD(&page->lru).
On 9/2/20 7:25 PM, Mike Kravetz wrote:
> On 9/2/20 3:49 AM, Vlastimil Babka wrote:
>> On 9/1/20 3:46 AM, Wei Yang wrote:
>>> The page allocated from buddy is not on any list, so just use list_add()
>>> is enough.
>>>
>>> Signed-off-by: Wei Yang <richard.weiyang@linux.alibaba.com>
>>> Reviewed-by: Baoquan He <bhe@redhat.com>
>>> Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
>>> ---
>>>  mm/hugetlb.c | 2 +-
>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>>> index 441b7f7c623e..c9b292e664c4 100644
>>> --- a/mm/hugetlb.c
>>> +++ b/mm/hugetlb.c
>>> @@ -2405,7 +2405,7 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
>>>  		h->resv_huge_pages--;
>>>  	}
>>>  	spin_lock(&hugetlb_lock);
>>> -	list_move(&page->lru, &h->hugepage_activelist);
>>> +	list_add(&page->lru, &h->hugepage_activelist);
>>
>> Hmm, how does that list_move() actually not crash today?
>> Page has been taken from free lists, thus there was list_del() and page->lru
>> should be poisoned.
>> list_move() does __list_del_entry() which will either detect the poison with
>> CONFIG_DEBUG_LIST, or crash accessing the poison, no?
>> Am I missing something or does it mean this code is actually never executed in wild?
>
> There is not enough context in the diff, but the hugetlb page was not taken
> from the free list. Rather, it was just created by a call to
> alloc_buddy_huge_page_with_mpol(). As part of the allocation/creation
> prep_new_huge_page will be called which will INIT_LIST_HEAD(&page->lru).

Ah so indeed I was missing something :) Thanks.
Then this is indeed an optimization and not a bugfix and doesn't need stable@.
Sorry for the noise.
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 441b7f7c623e..c9b292e664c4 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2405,7 +2405,7 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
 		h->resv_huge_pages--;
 	}
 	spin_lock(&hugetlb_lock);
-	list_move(&page->lru, &h->hugepage_activelist);
+	list_add(&page->lru, &h->hugepage_activelist);
 		/* Fall through */
 	}
 	hugetlb_cgroup_commit_charge(idx, pages_per_huge_page(h), h_cg, page);