Message ID | 20240926050647.5653-1-zhaoyang.huang@unisoc.com
---|---
State | New
Series | [PATCHv2] mm: migrate LRU_REFS_MASK bits in folio_migrate_flags
On 26.09.24 07:06, zhaoyang.huang wrote:
> From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
>
> Bits of LRU_REFS_MASK are not inherited during migration, which leads to
> the new folio starting from tier 0 when MGLRU is enabled. Carry over as
> many bits of folio->flags as possible, since compaction and
> alloc_contig_range, which trigger migration, do happen at times.
>
> Suggested-by: Yu Zhao <yuzhao@google.com>
> Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
> ---
> v2: modification as Yu Zhao suggested

Looks reasonable to me.

Acked-by: David Hildenbrand <david@redhat.com>
diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index f4fe593c1400..6f801c7b36e2 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -291,6 +291,12 @@ static inline bool lru_gen_del_folio(struct lruvec *lruvec, struct folio *folio,
 	return true;
 }
 
+static inline void folio_migrate_refs(struct folio *new, struct folio *old)
+{
+	unsigned long refs = READ_ONCE(old->flags) & LRU_REFS_MASK;
+
+	set_mask_bits(&new->flags, LRU_REFS_MASK, refs);
+}
 #else /* !CONFIG_LRU_GEN */
 
 static inline bool lru_gen_enabled(void)
@@ -313,6 +319,10 @@ static inline bool lru_gen_del_folio(struct lruvec *lruvec, struct folio *folio,
 	return false;
 }
 
+static inline void folio_migrate_refs(struct folio *new, struct folio *old)
+{
+
+}
 #endif /* CONFIG_LRU_GEN */
 
 static __always_inline
diff --git a/mm/migrate.c b/mm/migrate.c
index 923ea80ba744..60c97e235ae7 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -618,6 +618,7 @@ void folio_migrate_flags(struct folio *newfolio, struct folio *folio)
 	if (folio_test_idle(folio))
 		folio_set_idle(newfolio);
 
+	folio_migrate_refs(newfolio, folio);
 	/*
 	 * Copy NUMA information to the new page, to prevent over-eager
 	 * future migrations of this same page.