Message ID | 20200918030051.650890-4-yuzhao@google.com (mailing list archive)
---|---
State | New, archived
Series | mm: clean up some lru related pieces
On Thu 17-09-20 21:00:41, Yu Zhao wrote:
> Now we have a total of three places that free lru pages when their
> references become zero (after we drop the reference from isolation).
>
> Before this patch, they all do:
>
>         __ClearPageLRU()
>         page_off_lru()
>         del_page_from_lru_list()
>
> After this patch, they become:
>
>         page_off_lru()
>                 __ClearPageLRU()
>         del_page_from_lru_list()
>
> This change should have no side effects.

Again, why is this desirable?

> Signed-off-by: Yu Zhao <yuzhao@google.com>
> ---
>  include/linux/mm_inline.h | 1 +
>  mm/swap.c                 | 2 --
>  mm/vmscan.c               | 1 -
>  3 files changed, 1 insertion(+), 3 deletions(-)
>
> diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
> index 8fc71e9d7bb0..be9418425e41 100644
> --- a/include/linux/mm_inline.h
> +++ b/include/linux/mm_inline.h
> @@ -92,6 +92,7 @@ static __always_inline enum lru_list page_off_lru(struct page *page)
>  {
>         enum lru_list lru;
>
> +       __ClearPageLRU(page);
>         if (PageUnevictable(page)) {
>                 __ClearPageUnevictable(page);
>                 lru = LRU_UNEVICTABLE;
> diff --git a/mm/swap.c b/mm/swap.c
> index 40bf20a75278..8362083f00c9 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -86,7 +86,6 @@ static void __page_cache_release(struct page *page)
>                 spin_lock_irqsave(&pgdat->lru_lock, flags);
>                 lruvec = mem_cgroup_page_lruvec(page, pgdat);
>                 VM_BUG_ON_PAGE(!PageLRU(page), page);
> -               __ClearPageLRU(page);
>                 del_page_from_lru_list(page, lruvec, page_off_lru(page));
>                 spin_unlock_irqrestore(&pgdat->lru_lock, flags);
>         }
> @@ -895,7 +894,6 @@ void release_pages(struct page **pages, int nr)
>
>                         lruvec = mem_cgroup_page_lruvec(page, locked_pgdat);
>                         VM_BUG_ON_PAGE(!PageLRU(page), page);
> -                       __ClearPageLRU(page);
>                         del_page_from_lru_list(page, lruvec, page_off_lru(page));
>                 }
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index f257d2f61574..f9a186a96410 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1862,7 +1862,6 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
>                         add_page_to_lru_list(page, lruvec, lru);
>
>                 if (put_page_testzero(page)) {
> -                       __ClearPageLRU(page);
>                         del_page_from_lru_list(page, lruvec, page_off_lru(page));
>
>                         if (unlikely(PageCompound(page))) {
> --
> 2.28.0.681.g6f77f65b4e-goog
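For readers without the mm/ sources at hand, here is a minimal standalone C sketch of the pattern the patch converges on. This is not kernel code: the struct page, the boolean flag fields, and del_page_from_lru_list() below are simplified stand-ins invented for illustration. The point it shows is that once page_off_lru() clears the LRU flag itself, each of the three free paths reduces to the single call del_page_from_lru_list(page, lruvec, page_off_lru(page)).

```c
/*
 * Toy userspace model of the refactor; NOT the kernel implementation.
 * struct page and the fields below are simplified stand-ins used only
 * to show the call-site pattern after the patch.
 */
#include <stdbool.h>
#include <stdio.h>

enum lru_list { LRU_INACTIVE, LRU_ACTIVE, LRU_UNEVICTABLE };

struct page {
	bool lru;         /* models PageLRU() */
	bool active;      /* models PageActive() */
	bool unevictable; /* models PageUnevictable() */
};

/*
 * After the patch: the helper owns clearing the LRU flag, so callers
 * no longer need to pair it with a separate __ClearPageLRU().
 */
static enum lru_list page_off_lru(struct page *page)
{
	page->lru = false; /* the hoisted __ClearPageLRU(page) */
	if (page->unevictable) {
		page->unevictable = false;
		return LRU_UNEVICTABLE;
	}
	if (page->active) {
		page->active = false;
		return LRU_ACTIVE;
	}
	return LRU_INACTIVE;
}

/* Stand-in for the real list manipulation. */
static void del_page_from_lru_list(struct page *page, enum lru_list lru)
{
	printf("removed page from list %d, PageLRU now %d\n", lru, page->lru);
}

int main(void)
{
	struct page p = { .lru = true, .active = true };

	/* Each of the three free paths now reads the same way: */
	del_page_from_lru_list(&p, page_off_lru(&p));
	return 0;
}
```

The apparent design point, which the review question above asks the author to spell out, is deduplication: each call site executes the same operations in the same order as before, but the flag clearing lives in one place, so a future caller of page_off_lru() cannot forget it.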