Message ID | 20200831175042.3527153-1-yuzhao@google.com
---|---
State | New, archived
Series | [v2,1/2] mm: use add_page_to_lru_list()/page_lru()/page_off_lru()
On Mon 31-08-20 11:50:41, Yu Zhao wrote:
[...]
> @@ -1860,16 +1859,11 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
>         lruvec = mem_cgroup_page_lruvec(page, pgdat);
>
>         SetPageLRU(page);
> -       lru = page_lru(page);
> -
> -       nr_pages = thp_nr_pages(page);
> -       update_lru_size(lruvec, lru, page_zonenum(page), nr_pages);
> -       list_move(&page->lru, &lruvec->lists[lru]);
> +       add_page_to_lru_list(page, lruvec, page_lru(page));
>
>         if (put_page_testzero(page)) {
>                 __ClearPageLRU(page);
> -               __ClearPageActive(page);

This should go into its own patch. The rest is mechanical and clear.
This one needs some head scratching, e.g. is it correct for all compound
pages? I believe it should be, but a few words on why would be better.

> -               del_page_from_lru_list(page, lruvec, lru);
> +               del_page_from_lru_list(page, lruvec, page_off_lru(page));
>
>                 if (unlikely(PageCompound(page))) {
>                         spin_unlock_irq(&pgdat->lru_lock);
> @@ -1878,6 +1872,7 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
>                 } else
>                         list_add(&page->lru, &pages_to_free);
>         } else {
> +               nr_pages = thp_nr_pages(page);
>                 nr_moved += nr_pages;
>                 if (PageActive(page))
>                         workingset_age_nonresident(lruvec, nr_pages);
> --
> 2.28.0.402.g5ffc5be6b7-goog
>
On Thu, Sep 03, 2020 at 10:28:32AM +0200, Michal Hocko wrote:
> On Mon 31-08-20 11:50:41, Yu Zhao wrote:
> [...]
> > @@ -1860,16 +1859,11 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
> >         lruvec = mem_cgroup_page_lruvec(page, pgdat);
> >
> >         SetPageLRU(page);
> > -       lru = page_lru(page);
> > -
> > -       nr_pages = thp_nr_pages(page);
> > -       update_lru_size(lruvec, lru, page_zonenum(page), nr_pages);
> > -       list_move(&page->lru, &lruvec->lists[lru]);
> > +       add_page_to_lru_list(page, lruvec, page_lru(page));
> >
> >         if (put_page_testzero(page)) {
> >                 __ClearPageLRU(page);
> > -               __ClearPageActive(page);
>
> This should go into its own patch. The rest is mechanical and clear.

Thanks for reviewing.

I assume you are worrying about PG_unevictable being set on the page,
because page_off_lru() checks it first. Well, if that could happen,
wouldn't it have been triggering bad_page() already? PG_unevictable is
in PAGE_FLAGS_CHECK_AT_FREE.

> This one needs some head scratching, e.g. is it correct for all compound
> pages? I believe it should be, but a few words on why would be better.

Nothing seems special here compared with other places that follow the
same pattern, so I'm not sure what the concern is.

> > -               del_page_from_lru_list(page, lruvec, lru);
> > +               del_page_from_lru_list(page, lruvec, page_off_lru(page));
> >
> >                 if (unlikely(PageCompound(page))) {
> >                         spin_unlock_irq(&pgdat->lru_lock);
> > @@ -1878,6 +1872,7 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
> >                 } else
> >                         list_add(&page->lru, &pages_to_free);
> >         } else {
> > +               nr_pages = thp_nr_pages(page);
> >                 nr_moved += nr_pages;
> >                 if (PageActive(page))
> >                         workingset_age_nonresident(lruvec, nr_pages);
> > --
> > 2.28.0.402.g5ffc5be6b7-goog
> >
>
> --
> Michal Hocko
> SUSE Labs
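For context, page_off_lru() at the time of this series looked roughly
like the sketch below (paraphrased from include/linux/mm_inline.h around
v5.9; simplified, so the exact code in the tree may differ slightly):

    /* Paraphrased sketch, not verbatim kernel source. */
    static __always_inline enum lru_list page_off_lru(struct page *page)
    {
            enum lru_list lru;

            if (PageUnevictable(page)) {
                    /* unevictable pages carry PG_unevictable, not PG_active */
                    __ClearPageUnevictable(page);
                    lru = LRU_UNEVICTABLE;
            } else {
                    lru = page_lru_base_type(page);
                    if (PageActive(page)) {
                            /* PG_active is cleared here */
                            __ClearPageActive(page);
                            lru += LRU_ACTIVE;
                    }
            }
            return lru;
    }

Because the helper clears PG_active (or PG_unevictable) while computing
the lru index, passing page_off_lru(page) to del_page_from_lru_list()
makes the open-coded __ClearPageActive() in move_pages_to_lru() redundant.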
On Thu 03-09-20 21:24:00, Yu Zhao wrote:
> On Thu, Sep 03, 2020 at 10:28:32AM +0200, Michal Hocko wrote:
> > On Mon 31-08-20 11:50:41, Yu Zhao wrote:
> > [...]
> > > @@ -1860,16 +1859,11 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
> > >         lruvec = mem_cgroup_page_lruvec(page, pgdat);
> > >
> > >         SetPageLRU(page);
> > > -       lru = page_lru(page);
> > > -
> > > -       nr_pages = thp_nr_pages(page);
> > > -       update_lru_size(lruvec, lru, page_zonenum(page), nr_pages);
> > > -       list_move(&page->lru, &lruvec->lists[lru]);
> > > +       add_page_to_lru_list(page, lruvec, page_lru(page));
> > >
> > >         if (put_page_testzero(page)) {
> > >                 __ClearPageLRU(page);
> > > -               __ClearPageActive(page);
> >
> > This should go into its own patch. The rest is mechanical and clear.
>
> Thanks for reviewing.
>
> I assume you are worrying about PG_unevictable being set on the page,
> because page_off_lru() checks it first.

No, I was referring to __ClearPageActive. You are right that it is
cleared in page_off_lru(), but that happens in a release path, and
compound pages are released via their destructors, which for some of
them might not involve releasing the page at all; hugetlb pages, for
example. This should still be fine because hugetlb pages are never on
the LRU, so, as I've said, the change is correct. But it belongs in its
own patch, because it is not a purely mechanical change like the rest
of the patch.
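The destructor path Michal refers to looked roughly like the following
around that time (paraphrased from mm/swap.c; a simplified sketch, not
the exact code, and the helper names may differ between versions):

    /* Paraphrased sketch of the compound-page release path. */
    static void __put_compound_page(struct page *page)
    {
            /*
             * hugetlb pages are never placed on an LRU list, so the
             * LRU release work is skipped for them; they go straight
             * to their compound destructor.
             */
            if (!PageHuge(page))
                    __page_cache_release(page);     /* LRU/flag release, e.g. for THP */
            destroy_compound_page(page);            /* invoke the compound destructor */
    }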
On Fri, Sep 04, 2020 at 12:50:01PM +0200, Michal Hocko wrote:
> On Thu 03-09-20 21:24:00, Yu Zhao wrote:
> > On Thu, Sep 03, 2020 at 10:28:32AM +0200, Michal Hocko wrote:
> > > On Mon 31-08-20 11:50:41, Yu Zhao wrote:
> > > [...]
> > > > @@ -1860,16 +1859,11 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
> > > >         lruvec = mem_cgroup_page_lruvec(page, pgdat);
> > > >
> > > >         SetPageLRU(page);
> > > > -       lru = page_lru(page);
> > > > -
> > > > -       nr_pages = thp_nr_pages(page);
> > > > -       update_lru_size(lruvec, lru, page_zonenum(page), nr_pages);
> > > > -       list_move(&page->lru, &lruvec->lists[lru]);
> > > > +       add_page_to_lru_list(page, lruvec, page_lru(page));
> > > >
> > > >         if (put_page_testzero(page)) {
> > > >                 __ClearPageLRU(page);
> > > > -               __ClearPageActive(page);
> > >
> > > This should go into its own patch. The rest is mechanical and clear.
> >
> > Thanks for reviewing.
> >
> > I assume you are worrying about PG_unevictable being set on the page,
> > because page_off_lru() checks it first.
>
> No, I was referring to __ClearPageActive. You are right that it is
> cleared in page_off_lru(), but that happens in a release path, and
> compound pages are released via their destructors, which for some of
> them might not involve releasing the page at all; hugetlb pages, for
> example. This should still be fine because hugetlb pages are never on
> the LRU, so, as I've said, the change is correct. But it belongs in its
> own patch, because it is not a purely mechanical change like the rest
> of the patch.

Please bear with me. This is the change in question:

                if (put_page_testzero(page)) {
                        __ClearPageLRU(page);
-                       __ClearPageActive(page);
-                       del_page_from_lru_list(page, lruvec, lru);
+                       del_page_from_lru_list(page, lruvec, page_off_lru(page));

                        if (unlikely(PageCompound(page))) {

If the PageUnevictable() check in page_off_lru() is not a concern, I'm
trying to understand what else is different between them:

    Before this patch:          After this patch:
                                  page_off_lru()
      __ClearPageActive()           __ClearPageActive()
      add_page_to_lru_list()      add_page_to_lru_list()

And why is page_off_lru() hidden in a release path?
diff --git a/mm/swap.c b/mm/swap.c
index 40bf20a75278..2735ecf0f566 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -597,11 +597,9 @@ static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec,
 {
     if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) &&
         !PageSwapCache(page) && !PageUnevictable(page)) {
-        bool active = PageActive(page);
         int nr_pages = thp_nr_pages(page);

-        del_page_from_lru_list(page, lruvec,
-                       LRU_INACTIVE_ANON + active);
+        del_page_from_lru_list(page, lruvec, page_lru(page));
         ClearPageActive(page);
         ClearPageReferenced(page);
         /*
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 99e1796eb833..1b871c3987e7 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1845,13 +1845,12 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
     int nr_pages, nr_moved = 0;
     LIST_HEAD(pages_to_free);
     struct page *page;
-    enum lru_list lru;

     while (!list_empty(list)) {
         page = lru_to_page(list);
         VM_BUG_ON_PAGE(PageLRU(page), page);
+        list_del(&page->lru);
         if (unlikely(!page_evictable(page))) {
-            list_del(&page->lru);
             spin_unlock_irq(&pgdat->lru_lock);
             putback_lru_page(page);
             spin_lock_irq(&pgdat->lru_lock);
@@ -1860,16 +1859,11 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
         lruvec = mem_cgroup_page_lruvec(page, pgdat);

         SetPageLRU(page);
-        lru = page_lru(page);
-
-        nr_pages = thp_nr_pages(page);
-        update_lru_size(lruvec, lru, page_zonenum(page), nr_pages);
-        list_move(&page->lru, &lruvec->lists[lru]);
+        add_page_to_lru_list(page, lruvec, page_lru(page));

         if (put_page_testzero(page)) {
             __ClearPageLRU(page);
-            __ClearPageActive(page);
-            del_page_from_lru_list(page, lruvec, lru);
+            del_page_from_lru_list(page, lruvec, page_off_lru(page));

             if (unlikely(PageCompound(page))) {
                 spin_unlock_irq(&pgdat->lru_lock);
@@ -1878,6 +1872,7 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
             } else
                 list_add(&page->lru, &pages_to_free);
         } else {
+            nr_pages = thp_nr_pages(page);
             nr_moved += nr_pages;
             if (PageActive(page))
                 workingset_age_nonresident(lruvec, nr_pages);
This is a trivial but worthwhile cleanup patch. There should be no side
effects except that page->lru is temporarily poisoned after the page is
deleted but before it's added to the new list in move_pages_to_lru()
(which is not a problem).

[ I was under the false impression that page_off_lru() clears PG_lru;
  Alex Shi <alex.shi@linux.alibaba.com> pointed that out. So in v2, we
  keep __ClearPageLRU(). ]

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 mm/swap.c   |  4 +---
 mm/vmscan.c | 13 ++++---------
 2 files changed, 5 insertions(+), 12 deletions(-)
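The "temporarily poisoned" wording refers to list_del() leaving the
entry's pointers set to the list poison values until the page is linked
into a list again. A paraphrased sketch of list_del() (based on
include/linux/list.h; simplified, not quoted verbatim):

    static inline void list_del(struct list_head *entry)
    {
            __list_del_entry(entry);
            /* the pointers hold poison values until the entry is re-added */
            entry->next = LIST_POISON1;
            entry->prev = LIST_POISON2;
    }

Nothing reads those pointers before the page is added to a list again,
which is what the changelog means by it not being a problem.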