| Message ID | 20220608141432.23258-1-linmiaohe@huawei.com |
|---|---|
| State | New |
| Series | [v2] mm/vmscan: don't try to reclaim freed folios |
On Wed, Jun 08, 2022 at 10:14:32PM +0800, Miaohe Lin wrote:
> If folios were freed from under us, there's no need to reclaim them. Skip
> these folios to save lots of cpu cycles and avoid possible unnecessary
> disk I/O.

Yes, but I asked how often this happened, and you said you didn't know.
Do you have any data? I'm reluctant to make a function which is over
400 LOC already any longer.

> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
> v2:
>   use folio_ref_freeze to guard against race with GUP (fast). Many thanks
>   Matthew for pointing this out.
> ---
>  mm/vmscan.c | 10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 13d34d9593bb..547ae7ae6ab1 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1610,13 +1610,19 @@ static unsigned int shrink_page_list(struct list_head *page_list,
>          folio = lru_to_folio(page_list);
>          list_del(&folio->lru);
>
> +        nr_pages = folio_nr_pages(folio);
> +
> +        if (folio_ref_count(folio) == 1 &&
> +            folio_ref_freeze(folio, 1)) {
> +            /* folio was freed from under us. So we are done. */
> +            goto free_it;
> +        }
> +
>          if (!folio_trylock(folio))
>              goto keep;
>
>          VM_BUG_ON_FOLIO(folio_test_active(folio), folio);
>
> -        nr_pages = folio_nr_pages(folio);
> -
>          /* Account the number of base pages */
>          sc->nr_scanned += nr_pages;
>
> --
> 2.23.0
>
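One hypothetical way to gather the data Matthew asks for would be a throwaway counter at the new bail-out. The fragment below is only an illustration of that idea, not part of the posted patch: count_vm_events() is an existing vmstat helper, but the PGSCAN_SKIP_FREED event name is invented here and would have to be added to enum vm_event_item before it could show up in /proc/vmstat.

```c
	/*
	 * Measurement-only sketch: count how often shrink_page_list()
	 * finds a folio that was already freed under us.
	 * PGSCAN_SKIP_FREED is a hypothetical vm_event_item added just
	 * for this experiment.
	 */
	if (folio_ref_count(folio) == 1 &&
	    folio_ref_freeze(folio, 1)) {
		count_vm_events(PGSCAN_SKIP_FREED, nr_pages);
		/* folio was freed from under us. So we are done. */
		goto free_it;
	}
```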
On 2022/6/9 2:47, Matthew Wilcox wrote:
> On Wed, Jun 08, 2022 at 10:14:32PM +0800, Miaohe Lin wrote:
>> If folios were freed from under us, there's no need to reclaim them. Skip
>> these folios to save lots of cpu cycles and avoid possible unnecessary
>> disk I/O.
>
> Yes, but I asked how often this happened, and you said you didn't know.
> Do you have any data? I'm reluctant to make a function which is over

This is just like the page_count == 1 case when doing page migration in
unmap_and_move().

> 400 LOC already any longer.

I'm fine to hold off resending this patch until your work is done (I would
be really grateful to be notified when that work is done). Thanks!

>
>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>> ---
>> v2:
>>   use folio_ref_freeze to guard against race with GUP (fast). Many thanks
>>   Matthew for pointing this out.
>> ---
>>  mm/vmscan.c | 10 ++++++++--
>>  1 file changed, 8 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>> index 13d34d9593bb..547ae7ae6ab1 100644
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
>> @@ -1610,13 +1610,19 @@ static unsigned int shrink_page_list(struct list_head *page_list,
>>          folio = lru_to_folio(page_list);
>>          list_del(&folio->lru);
>>
>> +        nr_pages = folio_nr_pages(folio);
>> +
>> +        if (folio_ref_count(folio) == 1 &&
>> +            folio_ref_freeze(folio, 1)) {
>> +            /* folio was freed from under us. So we are done. */
>> +            goto free_it;
>> +        }
>> +
>>          if (!folio_trylock(folio))
>>              goto keep;
>>
>>          VM_BUG_ON_FOLIO(folio_test_active(folio), folio);
>>
>> -        nr_pages = folio_nr_pages(folio);
>> -
>>          /* Account the number of base pages */
>>          sc->nr_scanned += nr_pages;
>>
>> --
>> 2.23.0
>>
>
> .
>
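For context, the migration-side check Miaohe draws the analogy to looks roughly like the sketch below. It is a simplified paraphrase of the page_count(page) == 1 bail-out in unmap_and_move() (mm/migrate.c), not a verbatim quote of any particular kernel release, and the wrapper function name is made up for illustration.

```c
/*
 * Simplified paraphrase of the early exit in unmap_and_move(): a page
 * whose reference count is 1 is held only by the isolation reference,
 * i.e. it was freed from under us, so there is nothing left to migrate.
 */
static int unmap_and_move_sketch(struct page *page)
{
	if (page_count(page) == 1) {
		/* Page was freed from under us. So we are done. */
		ClearPageActive(page);
		ClearPageUnevictable(page);
		return MIGRATEPAGE_SUCCESS;
	}

	/* ... the normal unmap-and-copy migration path follows here ... */
	return -EAGAIN;
}
```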
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 13d34d9593bb..547ae7ae6ab1 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1610,13 +1610,19 @@ static unsigned int shrink_page_list(struct list_head *page_list,
         folio = lru_to_folio(page_list);
         list_del(&folio->lru);
 
+        nr_pages = folio_nr_pages(folio);
+
+        if (folio_ref_count(folio) == 1 &&
+            folio_ref_freeze(folio, 1)) {
+            /* folio was freed from under us. So we are done. */
+            goto free_it;
+        }
+
         if (!folio_trylock(folio))
             goto keep;
 
         VM_BUG_ON_FOLIO(folio_test_active(folio), folio);
 
-        nr_pages = folio_nr_pages(folio);
-
         /* Account the number of base pages */
         sc->nr_scanned += nr_pages;
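The reason the patch pairs the cheap folio_ref_count() == 1 test with folio_ref_freeze(folio, 1) is that a plain count check can race with GUP-fast: a reference can be taken between reading the count and freeing the folio. The freeze is a compare-and-swap from 1 to 0, so it only succeeds when nobody else grabbed the folio in the meantime. The userspace program below is a minimal sketch of that semantics only, not the kernel implementation (which lives in include/linux/page_ref.h); the function and variable names are invented for the demo.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/*
 * Model of folio_ref_freeze(folio, expected): atomically replace the
 * reference count with 0, but only if it is still exactly `expected`.
 * A concurrent reference taker (GUP-fast in the kernel) that bumps the
 * count in between makes the compare-and-swap fail, so the caller never
 * frees memory that someone else just started using.
 */
static bool ref_freeze(atomic_int *refcount, int expected)
{
	return atomic_compare_exchange_strong(refcount, &expected, 0);
}

int main(void)
{
	atomic_int ref = 1;

	/* Only the isolation reference is left: freeze succeeds, safe to free. */
	if (ref_freeze(&ref, 1))
		printf("frozen at 0: no other users, folio can be freed\n");

	/* Someone else (think GUP-fast) took a reference first. */
	atomic_store(&ref, 2);
	if (!ref_freeze(&ref, 1))
		printf("freeze failed: another user holds a reference, keep the folio\n");

	return 0;
}
```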
If folios were freed from under us, there's no need to reclaim them. Skip
these folios to save lots of cpu cycles and avoid possible unnecessary
disk I/O.

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
v2:
  use folio_ref_freeze to guard against race with GUP (fast). Many thanks
  Matthew for pointing this out.
---
 mm/vmscan.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)