Message ID | 20210329232402.575396-3-mike.kravetz@oracle.com (mailing list archive)
---|---
State | New, archived
Series | make hugetlb put_page safe for all calling contexts
On Mon, Mar 29, 2021 at 04:23:56PM -0700, Mike Kravetz wrote:
> Now that cma_release is non-blocking and irq safe, there is no need to
> drop hugetlb_lock before calling.
>
> Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
> ---
>  mm/hugetlb.c | 6 ------
>  1 file changed, 6 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 3c3e4baa4156..1d62f0492e7b 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1353,14 +1353,8 @@ static void update_and_free_page(struct hstate *h, struct page *page)
>  	set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
>  	set_page_refcounted(page);
>  	if (hstate_is_gigantic(h)) {
> -		/*
> -		 * Temporarily drop the hugetlb_lock, because
> -		 * we might block in free_gigantic_page().
> -		 */
> -		spin_unlock(&hugetlb_lock);
>  		destroy_compound_gigantic_page(page, huge_page_order(h));
>  		free_gigantic_page(page, huge_page_order(h));
> -		spin_lock(&hugetlb_lock);
>  	} else {
>  		__free_pages(page, huge_page_order(h));
>  	}
> --
> 2.30.2
>

Acked-by: Roman Gushchin <guro@fb.com>

Thanks!
On Mon 29-03-21 16:23:56, Mike Kravetz wrote:
> Now that cma_release is non-blocking and irq safe, there is no need to
> drop hugetlb_lock before calling.
>
> Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>

Acked-by: Michal Hocko <mhocko@suse.com>

> ---
>  mm/hugetlb.c | 6 ------
>  1 file changed, 6 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 3c3e4baa4156..1d62f0492e7b 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1353,14 +1353,8 @@ static void update_and_free_page(struct hstate *h, struct page *page)
>  	set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
>  	set_page_refcounted(page);
>  	if (hstate_is_gigantic(h)) {
> -		/*
> -		 * Temporarily drop the hugetlb_lock, because
> -		 * we might block in free_gigantic_page().
> -		 */
> -		spin_unlock(&hugetlb_lock);
>  		destroy_compound_gigantic_page(page, huge_page_order(h));
>  		free_gigantic_page(page, huge_page_order(h));
> -		spin_lock(&hugetlb_lock);
>  	} else {
>  		__free_pages(page, huge_page_order(h));
>  	}
> --
> 2.30.2
>
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 3c3e4baa4156..1d62f0492e7b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1353,14 +1353,8 @@ static void update_and_free_page(struct hstate *h, struct page *page)
 	set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
 	set_page_refcounted(page);
 	if (hstate_is_gigantic(h)) {
-		/*
-		 * Temporarily drop the hugetlb_lock, because
-		 * we might block in free_gigantic_page().
-		 */
-		spin_unlock(&hugetlb_lock);
 		destroy_compound_gigantic_page(page, huge_page_order(h));
 		free_gigantic_page(page, huge_page_order(h));
-		spin_lock(&hugetlb_lock);
 	} else {
 		__free_pages(page, huge_page_order(h));
 	}
Now that cma_release is non-blocking and irq safe, there is no need to
drop hugetlb_lock before calling.

Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 mm/hugetlb.c | 6 ------
 1 file changed, 6 deletions(-)
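For readers outside mm/, below is a minimal userspace C sketch of the locking pattern this change relies on: once the callee is guaranteed never to block, the caller can keep a spinlock held across the call instead of dropping and re-taking it around the call site. The names release_resource and resource_lock are invented for illustration; this is an analogy to the hugetlb_lock/free_gigantic_page relationship, not the kernel code itself.

/*
 * Illustrative userspace sketch (not kernel code). release_resource()
 * plays the role of a callee that used to sleep but is now non-blocking,
 * so it is safe to call with a spinlock held.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_spinlock_t resource_lock;
static int free_count;

/* Non-blocking release path: plain bookkeeping, never sleeps. */
static void release_resource(int id)
{
	(void)id;
	free_count++;
}

static void update_and_free(int id)
{
	pthread_spin_lock(&resource_lock);
	/*
	 * Before: the lock had to be dropped here because the release path
	 * could block.  After: release_resource() is non-blocking, so the
	 * lock stays held across the whole operation and the unlock/relock
	 * pair disappears.
	 */
	release_resource(id);
	pthread_spin_unlock(&resource_lock);
}

int main(void)
{
	pthread_spin_init(&resource_lock, PTHREAD_PROCESS_PRIVATE);
	update_and_free(1);
	printf("freed %d resource(s)\n", free_count);
	pthread_spin_destroy(&resource_lock);
	return 0;
}

Keeping the lock held removes a window in which another CPU could observe intermediate state, which is the same simplification the removed spin_unlock/spin_lock pair gives update_and_free_page() in the diff above.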