| Message ID | 20221208203503.20665-5-vishal.moola@gmail.com |
|---|---|
| State | New |
| Series | Convert deactivate_page() to folio_deactivate() |
On Thu, Dec 08, 2022 at 12:35:03PM -0800, Vishal Moola (Oracle) wrote:

> Deactivate_page() has already been converted to use folios. This change
> converts it to take in a folio argument instead of calling page_folio().
> It also renames the function to folio_deactivate() to be more consistent
> with other folio functions.
>
> Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>

Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>

(for future series like this, it's slightly fewer changes to introduce
folio_deactivate() first and change deactivate_page() to be a wrapper.
Then patches 2 & 3 in this series can just be converted straight to
folio_deactivate() instead of being changed twice. I wouldn't ask you to
redo the patch series at this point, but next time ...)
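For readers unfamiliar with this conversion pattern, the intermediate step
Willy describes looks roughly like the sketch below. This is hypothetical
code, not part of the posted series: folio_deactivate() lands first with the
real logic, while deactivate_page() temporarily becomes a one-line wrapper
that callers can migrate away from one patch at a time.

/*
 * Hypothetical intermediate step, sketched for illustration only;
 * the posted series does not include this wrapper stage.
 */
void folio_deactivate(struct folio *folio)
{
	/* ... existing deactivate_page() body, operating on the folio ... */
}

/* Old entry point kept alive as a thin wrapper during the transition. */
void deactivate_page(struct page *page)
{
	folio_deactivate(page_folio(page));
}

Once every caller has been switched to folio_deactivate(), the wrapper is
deleted in a final patch, so no caller ever needs to be touched twice.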
On Thu, 8 Dec 2022 12:35:03 -0800 "Vishal Moola (Oracle)" <vishal.moola@gmail.com> wrote:

> Deactivate_page() has already been converted to use folios. This change
> converts it to take in a folio argument instead of calling page_folio().
> It also renames the function to folio_deactivate() to be more consistent
> with other folio functions.
>
> Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>

Reviewed-by: SeongJae Park <sj@kernel.org>

Thanks,
SJ

> ---
>  include/linux/swap.h |  2 +-
>  mm/damon/paddr.c     |  2 +-
>  mm/madvise.c         |  4 ++--
>  mm/swap.c            | 14 ++++++--------
>  4 files changed, 10 insertions(+), 12 deletions(-)
>
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index a18cf4b7c724..6427b3af30c3 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -409,7 +409,7 @@ extern void lru_add_drain(void);
>  extern void lru_add_drain_cpu(int cpu);
>  extern void lru_add_drain_cpu_zone(struct zone *zone);
>  extern void lru_add_drain_all(void);
> -extern void deactivate_page(struct page *page);
> +void folio_deactivate(struct folio *folio);
>  extern void mark_page_lazyfree(struct page *page);
>  extern void swap_setup(void);
>
> diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
> index 73548bc82297..6b36de1396a4 100644
> --- a/mm/damon/paddr.c
> +++ b/mm/damon/paddr.c
> @@ -247,7 +247,7 @@ static inline unsigned long damon_pa_mark_accessed_or_deactivate(
>  		if (mark_accessed)
>  			folio_mark_accessed(folio);
>  		else
> -			deactivate_page(&folio->page);
> +			folio_deactivate(folio);
>  		folio_put(folio);
>  		applied += folio_nr_pages(folio);
>  	}
> diff --git a/mm/madvise.c b/mm/madvise.c
> index 2a84b5dfbb4c..1ab293019862 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -396,7 +396,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>  				list_add(&folio->lru, &folio_list);
>  			}
>  		} else
> -			deactivate_page(&folio->page);
> +			folio_deactivate(folio);
>  huge_unlock:
>  	spin_unlock(ptl);
>  	if (pageout)
> @@ -485,7 +485,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>  				list_add(&folio->lru, &folio_list);
>  			}
>  		} else
> -			deactivate_page(&folio->page);
> +			folio_deactivate(folio);
>  	}
>
>  	arch_leave_lazy_mmu_mode();
> diff --git a/mm/swap.c b/mm/swap.c
> index 955930f41d20..9cc8215acdbb 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -720,17 +720,15 @@ void deactivate_file_folio(struct folio *folio)
>  }
>
>  /*
> - * deactivate_page - deactivate a page
> - * @page: page to deactivate
> + * folio_deactivate - deactivate a folio
> + * @folio: folio to deactivate
>   *
> - * deactivate_page() moves @page to the inactive list if @page was on the active
> - * list and was not an unevictable page. This is done to accelerate the reclaim
> - * of @page.
> + * folio_deactivate() moves @folio to the inactive list if @folio was on the
> + * active list and was not unevictable. This is done to accelerate the
> + * reclaim of @folio.
>   */
> -void deactivate_page(struct page *page)
> +void folio_deactivate(struct folio *folio)
>  {
> -	struct folio *folio = page_folio(page);
> -
>  	if (folio_test_lru(folio) && !folio_test_unevictable(folio) &&
>  	    (folio_test_active(folio) || lru_gen_enabled())) {
>  		struct folio_batch *fbatch;
> --
> 2.38.1
>
diff --git a/include/linux/swap.h b/include/linux/swap.h
index a18cf4b7c724..6427b3af30c3 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -409,7 +409,7 @@ extern void lru_add_drain(void);
 extern void lru_add_drain_cpu(int cpu);
 extern void lru_add_drain_cpu_zone(struct zone *zone);
 extern void lru_add_drain_all(void);
-extern void deactivate_page(struct page *page);
+void folio_deactivate(struct folio *folio);
 extern void mark_page_lazyfree(struct page *page);
 extern void swap_setup(void);

diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
index 73548bc82297..6b36de1396a4 100644
--- a/mm/damon/paddr.c
+++ b/mm/damon/paddr.c
@@ -247,7 +247,7 @@ static inline unsigned long damon_pa_mark_accessed_or_deactivate(
 		if (mark_accessed)
 			folio_mark_accessed(folio);
 		else
-			deactivate_page(&folio->page);
+			folio_deactivate(folio);
 		folio_put(folio);
 		applied += folio_nr_pages(folio);
 	}
diff --git a/mm/madvise.c b/mm/madvise.c
index 2a84b5dfbb4c..1ab293019862 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -396,7 +396,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 				list_add(&folio->lru, &folio_list);
 			}
 		} else
-			deactivate_page(&folio->page);
+			folio_deactivate(folio);
 huge_unlock:
 	spin_unlock(ptl);
 	if (pageout)
@@ -485,7 +485,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 				list_add(&folio->lru, &folio_list);
 			}
 		} else
-			deactivate_page(&folio->page);
+			folio_deactivate(folio);
 	}

 	arch_leave_lazy_mmu_mode();
diff --git a/mm/swap.c b/mm/swap.c
index 955930f41d20..9cc8215acdbb 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -720,17 +720,15 @@ void deactivate_file_folio(struct folio *folio)
 }

 /*
- * deactivate_page - deactivate a page
- * @page: page to deactivate
+ * folio_deactivate - deactivate a folio
+ * @folio: folio to deactivate
  *
- * deactivate_page() moves @page to the inactive list if @page was on the active
- * list and was not an unevictable page. This is done to accelerate the reclaim
- * of @page.
+ * folio_deactivate() moves @folio to the inactive list if @folio was on the
+ * active list and was not unevictable. This is done to accelerate the
+ * reclaim of @folio.
  */
-void deactivate_page(struct page *page)
+void folio_deactivate(struct folio *folio)
 {
-	struct folio *folio = page_folio(page);
-
 	if (folio_test_lru(folio) && !folio_test_unevictable(folio) &&
 	    (folio_test_active(folio) || lru_gen_enabled())) {
 		struct folio_batch *fbatch;
Deactivate_page() has already been converted to use folios. This change
converts it to take in a folio argument instead of calling page_folio().
It also renames the function to folio_deactivate() to be more consistent
with other folio functions.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/linux/swap.h |  2 +-
 mm/damon/paddr.c     |  2 +-
 mm/madvise.c         |  4 ++--
 mm/swap.c            | 14 ++++++--------
 4 files changed, 10 insertions(+), 12 deletions(-)
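The caller-side pattern repeated across the damon and madvise hunks above is
worth seeing in isolation. The fragment below is illustrative only, assuming
a caller that already holds a folio:

/* Before: the caller degrades its folio to a page, and deactivate_page()
 * immediately reconstructs the folio with page_folio(). */
deactivate_page(&folio->page);

/* After: the folio is passed through unchanged. The internal page_folio()
 * call disappears, and the signature makes explicit that the whole folio,
 * not a single page, is deactivated. */
folio_deactivate(folio);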