| Message ID | 20240327075516.1367097-1-zhaoyang.huang@unisoc.com (mailing list archive) |
|---|---|
| State | New |
| Series | mm: get the folio's refcnt before clear PG_lru in folio_isolate_lru |
On Wed, Mar 27, 2024 at 03:55:16PM +0800, zhaoyang.huang wrote:
> From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
>
> The race below happens when the caller of folio_isolate_lru relies on the
> refcnt of the page cache. Move folio_get ahead of folio_test_clear_lru to
> make it more robust.

No, as explained to you multiple times before.
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 3ef654addd44..42f15ca06e09 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1731,10 +1731,10 @@ bool folio_isolate_lru(struct folio *folio)

 	VM_BUG_ON_FOLIO(!folio_ref_count(folio), folio);

+	folio_get(folio);
 	if (folio_test_clear_lru(folio)) {
 		struct lruvec *lruvec;

-		folio_get(folio);
 		lruvec = folio_lruvec_lock_irq(folio);
 		lruvec_del_folio(lruvec, folio);
 		unlock_page_lruvec_irq(lruvec);
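[For context: the interleaving the submitter appears to have in mind can be sketched as below. This is the submitter's claimed scenario, not runnable kernel code, and the reviewer rejects the premise (a caller of folio_isolate_lru() is expected to already hold its own reference); the "CPU 1" actor is a hypothetical concurrent release path.]

/*
 * Sketch of the claimed race (function names are the kernel's; the
 * scenario is the submitter's claim, assumed here for illustration):
 *
 *   CPU 0: caller of folio_isolate_lru()   CPU 1: concurrent release path
 *   ------------------------------------   ------------------------------
 *   folio_test_clear_lru(folio)
 *     // PG_lru cleared, but the caller
 *     // has not yet taken its own ref
 *                                          drops the last (page-cache)
 *                                          reference; folio is freed
 *   folio_get(folio)
 *     // would pin an already-freed folio
 *
 * The posted diff moves folio_get() before folio_test_clear_lru() so the
 * folio is pinned before PG_lru is cleared.
 */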