Message ID | 20240618020926.1911903-1-zhaoyang.huang@unisoc.com
---|---
State | New
Series | mm: fix hard lockup in __split_huge_page
On Tue, Jun 18, 2024 at 10:09:26AM +0800, zhaoyang.huang wrote:
> Hard lockup[2] is reported which should be caused by recursive
> lock contention of lruvec->lru_lock[1] within __split_huge_page.
>
> [1]
> static void __split_huge_page(struct page *page, struct list_head *list,
> 		pgoff_t end, unsigned int new_order)
> {
> 	/* lock lru list/PageCompound, ref frozen by page_ref_freeze */
> 	//1st lock here
> 	lruvec = folio_lruvec_lock(folio);
>
> 	for (i = nr - new_nr; i >= new_nr; i -= new_nr) {
> 		__split_huge_page_tail(folio, i, lruvec, list, new_order);
> 		/* Some pages can be beyond EOF: drop them from page cache */
> 		if (head[i].index >= end) {
> 			folio_put(tail);
> 				__page_cache_release
> 				//2nd lock here
> 				folio_lruvec_relock_irqsave

Why doesn't lockdep catch this?

> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 9859aa4f7553..ea504df46d3b 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2925,7 +2925,9 @@ static void __split_huge_page(struct page *page, struct list_head *list,
>  				folio_account_cleaned(tail,
>  					inode_to_wb(folio->mapping->host));
>  			__filemap_remove_folio(tail, NULL);
> +			unlock_page_lruvec(lruvec);
>  			folio_put(tail);
> +			folio_lruvec_lock(folio);

Why is it safe to drop & reacquire this lock? Is there nothing we need
to revalidate?
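[For readers following the thread: lruvec->lru_lock is an ordinary,
non-recursive spinlock, so a second acquisition by the thread already
holding it spins forever -- exactly the reported hard lockup. The
following is a minimal userspace sketch of that pattern, with an owner
check standing in for the kind of self-deadlock detection lockdep
performs; all names here (checked_lock, lru_lock_acquire, ...) are
hypothetical and exist only for this illustration, they are not kernel
APIs.]

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct checked_lock {
	pthread_mutex_t mutex;
	pthread_t owner;	/* meaningful only while held */
	int held;
};

static struct checked_lock lru_lock = {
	.mutex = PTHREAD_MUTEX_INITIALIZER,
};

static void lru_lock_acquire(struct checked_lock *l)
{
	/* A real spinlock would simply spin here forever: hard lockup. */
	if (l->held && pthread_equal(l->owner, pthread_self())) {
		fprintf(stderr, "BUG: recursive acquisition of lru_lock\n");
		exit(1);
	}
	pthread_mutex_lock(&l->mutex);
	l->owner = pthread_self();
	l->held = 1;
}

static void lru_lock_release(struct checked_lock *l)
{
	l->held = 0;
	pthread_mutex_unlock(&l->mutex);
}

/* Models __page_cache_release(): the hidden second acquisition. */
static void page_cache_release_model(void)
{
	lru_lock_acquire(&lru_lock);	/* 2nd lock here */
	lru_lock_release(&lru_lock);
}

int main(void)
{
	lru_lock_acquire(&lru_lock);	/* 1st lock here */
	page_cache_release_model();	/* detected and reported */
	lru_lock_release(&lru_lock);
	return 0;
}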
On Tue, Jun 18, 2024 at 11:19 AM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Tue, Jun 18, 2024 at 10:09:26AM +0800, zhaoyang.huang wrote:
> > Hard lockup[2] is reported which should be caused by recursive
> > lock contention of lruvec->lru_lock[1] within __split_huge_page.
> >
> > [1]
> > static void __split_huge_page(struct page *page, struct list_head *list,
> > 		pgoff_t end, unsigned int new_order)
> > {
> > 	/* lock lru list/PageCompound, ref frozen by page_ref_freeze */
> > 	//1st lock here
> > 	lruvec = folio_lruvec_lock(folio);
> >
> > 	for (i = nr - new_nr; i >= new_nr; i -= new_nr) {
> > 		__split_huge_page_tail(folio, i, lruvec, list, new_order);
> > 		/* Some pages can be beyond EOF: drop them from page cache */
> > 		if (head[i].index >= end) {
> > 			folio_put(tail);
> > 				__page_cache_release
> > 				//2nd lock here
> > 				folio_lruvec_relock_irqsave
>
> Why doesn't lockdep catch this?
It is reported by a regression test of the fix patch, which aims at the
find_get_entry livelock issue linked below. I don't know the details of
the kernel configuration.

https://lore.kernel.org/linux-mm/5f989315-e380-46aa-80d1-ce8608889e5f@marcinwanat.pl/
>
> > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > index 9859aa4f7553..ea504df46d3b 100644
> > --- a/mm/huge_memory.c
> > +++ b/mm/huge_memory.c
> > @@ -2925,7 +2925,9 @@ static void __split_huge_page(struct page *page, struct list_head *list,
> >  				folio_account_cleaned(tail,
> >  					inode_to_wb(folio->mapping->host));
> >  			__filemap_remove_folio(tail, NULL);
> > +			unlock_page_lruvec(lruvec);
> >  			folio_put(tail);
> > +			folio_lruvec_lock(folio);
>
> Why is it safe to drop & reacquire this lock? Is there nothing we need
> to revalidate?
My mistake. I will take that into consideration in the next version.
>
On Tue, Jun 18, 2024 at 11:27:12AM +0800, Zhaoyang Huang wrote:
> On Tue, Jun 18, 2024 at 11:19 AM Matthew Wilcox <willy@infradead.org> wrote:
> >
> > On Tue, Jun 18, 2024 at 10:09:26AM +0800, zhaoyang.huang wrote:
> > > Hard lockup[2] is reported which should be caused by recursive
> > > lock contention of lruvec->lru_lock[1] within __split_huge_page.
> > >
> > > [1]
> > > static void __split_huge_page(struct page *page, struct list_head *list,
> > > 		pgoff_t end, unsigned int new_order)
> > > {
> > > 	/* lock lru list/PageCompound, ref frozen by page_ref_freeze */
> > > 	//1st lock here
> > > 	lruvec = folio_lruvec_lock(folio);
> > >
> > > 	for (i = nr - new_nr; i >= new_nr; i -= new_nr) {
> > > 		__split_huge_page_tail(folio, i, lruvec, list, new_order);
> > > 		/* Some pages can be beyond EOF: drop them from page cache */
> > > 		if (head[i].index >= end) {
> > > 			folio_put(tail);
> > > 				__page_cache_release
> > > 				//2nd lock here
> > > 				folio_lruvec_relock_irqsave
> >
> > Why doesn't lockdep catch this?
> It is reported by a regression test of the fix patch, which aims at the
> find_get_entry livelock issue linked below. I don't know the details of
> the kernel configuration.
>
> https://lore.kernel.org/linux-mm/5f989315-e380-46aa-80d1-ce8608889e5f@marcinwanat.pl/

Go away.
On Tue, Jun 18, 2024 at 11:31 AM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Tue, Jun 18, 2024 at 11:27:12AM +0800, Zhaoyang Huang wrote:
> > On Tue, Jun 18, 2024 at 11:19 AM Matthew Wilcox <willy@infradead.org> wrote:
> > >
> > > On Tue, Jun 18, 2024 at 10:09:26AM +0800, zhaoyang.huang wrote:
> > > > Hard lockup[2] is reported which should be caused by recursive
> > > > lock contention of lruvec->lru_lock[1] within __split_huge_page.
> > > >
> > > > [1]
> > > > static void __split_huge_page(struct page *page, struct list_head *list,
> > > > 		pgoff_t end, unsigned int new_order)
> > > > {
> > > > 	/* lock lru list/PageCompound, ref frozen by page_ref_freeze */
> > > > 	//1st lock here
> > > > 	lruvec = folio_lruvec_lock(folio);
> > > >
> > > > 	for (i = nr - new_nr; i >= new_nr; i -= new_nr) {
> > > > 		__split_huge_page_tail(folio, i, lruvec, list, new_order);
> > > > 		/* Some pages can be beyond EOF: drop them from page cache */
> > > > 		if (head[i].index >= end) {
> > > > 			folio_put(tail);
> > > > 				__page_cache_release
> > > > 				//2nd lock here
> > > > 				folio_lruvec_relock_irqsave
> > >
> > > Why doesn't lockdep catch this?
> > It is reported by a regression test of the fix patch, which aims at the
> > find_get_entry livelock issue linked below. I don't know the details of
> > the kernel configuration.
> >
> > https://lore.kernel.org/linux-mm/5f989315-e380-46aa-80d1-ce8608889e5f@marcinwanat.pl/
>
> Go away.
OK, you are the boss anyway. But this call chain does carry a real risk
of deadlock, right? Besides, the livelock issue caused by a zero-refcount
folio within find_get_entry keeps being reported by different users.

https://lore.kernel.org/linux-mm/CALOAHbC8NM7R-pKvPW6m4fnn_8BQZuPjJrNZaEN=sg67Gp+NGQ@mail.gmail.com/
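[For context on the find_get_entry livelock referenced here: during a
split the folio's refcount is frozen to zero by page_ref_freeze, and the
page-cache lookup retries until folio_try_get succeeds. Below is a
minimal single-threaded userspace model of that retry loop -- the
*_model names are hypothetical, this is not the kernel implementation,
and the retry count bound exists only so the demo terminates. If the
thread that froze the refcount can never make progress (say, it is
spinning on lru_lock as in the lockup above), the real loop never
exits.]

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct folio_model {
	atomic_int refcount;
};

/* Models folio_try_get(): fails while the refcount is frozen at zero. */
static bool folio_try_get_model(struct folio_model *f)
{
	int ref = atomic_load(&f->refcount);

	while (ref > 0) {
		if (atomic_compare_exchange_weak(&f->refcount, &ref, ref + 1))
			return true;
	}
	return false;	/* frozen: caller must retry the whole lookup */
}

/* Models the retry loop in find_get_entry(), bounded for the demo. */
static struct folio_model *find_get_entry_model(struct folio_model *slot)
{
	for (long tries = 0; ; tries++) {
		struct folio_model *f = slot;	/* stands in for xas_load() */

		if (!f)
			return NULL;
		if (folio_try_get_model(f))
			return f;
		/*
		 * Refcount is zero: someone froze it for a split. If the
		 * freezer never unfreezes it, this loop is a livelock.
		 */
		if (tries == 100000000L) {
			printf("still retrying after %ld attempts: livelock\n",
			       tries);
			return NULL;
		}
	}
}

int main(void)
{
	struct folio_model frozen = { .refcount = 0 };

	find_get_entry_model(&frozen);
	return 0;
}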
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 9859aa4f7553..ea504df46d3b 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2925,7 +2925,9 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 				folio_account_cleaned(tail,
 					inode_to_wb(folio->mapping->host));
 			__filemap_remove_folio(tail, NULL);
+			unlock_page_lruvec(lruvec);
 			folio_put(tail);
+			folio_lruvec_lock(folio);
 		} else if (!PageAnon(page)) {
 			__xa_store(&folio->mapping->i_pages, head[i].index,
 					head + i, 0);
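[A note on the revalidation question raised above: folio_lruvec_lock()
re-derives the lruvec from the folio and returns it, but the hunk above
appears to discard that return value, so later loop iterations would
keep using the lruvec pointer captured before the lock was dropped --
stale if the folio migrated to a different lruvec in the window. Below
is a generic userspace sketch of the drop / re-derive / revalidate
pattern being asked for; all *_model names are hypothetical, not the
kernel API (the kernel's analogous helper, per the call chain quoted in
this thread, is folio_lruvec_relock_irqsave()).]

#include <pthread.h>

struct lruvec_model {
	pthread_mutex_t lock;
};

struct folio_model {
	struct lruvec_model *lruvec;	/* may be repointed while unlocked */
};

/* Lock the lruvec the folio belongs to *now*, revalidating after locking. */
static struct lruvec_model *folio_lruvec_lock_model(struct folio_model *f)
{
	for (;;) {
		struct lruvec_model *lv = f->lruvec;

		pthread_mutex_lock(&lv->lock);
		if (lv == f->lruvec)
			return lv;	/* still the right lruvec */
		pthread_mutex_unlock(&lv->lock);	/* migrated; retry */
	}
}

/* The shape of the fix: re-capture the lruvec instead of discarding it. */
static void drop_and_relock(struct folio_model *folio,
			    struct lruvec_model **lruvecp)
{
	pthread_mutex_unlock(&(*lruvecp)->lock);
	/* ...the folio_put(tail) equivalent would run here, unlocked... */
	*lruvecp = folio_lruvec_lock_model(folio);	/* fresh, validated */
}

int main(void)
{
	struct lruvec_model lv = { .lock = PTHREAD_MUTEX_INITIALIZER };
	struct folio_model folio = { .lruvec = &lv };
	struct lruvec_model *held;

	held = folio_lruvec_lock_model(&folio);	/* 1st lock */
	drop_and_relock(&folio, &held);		/* drop, put, relock */
	pthread_mutex_unlock(&held->lock);
	return 0;
}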