| Message ID | 20201005121534.15649-4-david@redhat.com |
|---|---|
| State | Accepted |
| Commit | 293ffa5ebb9c08a77d8de458166c31b4d7b0cd65 |
| Series | mm: place pages to the freelist tail when onlining and undoing isolation |
On Mon 05-10-20 14:15:32, David Hildenbrand wrote:
> Whenever we move pages between freelists via move_to_free_list()/
> move_freepages_block(), we don't actually touch the pages:
> 1. Page isolation doesn't actually touch the pages; it simply isolates
>    pageblocks and moves all free pages to the MIGRATE_ISOLATE freelist.
>    When undoing isolation, we move the pages back to the target list.
> 2. Page stealing (steal_suitable_fallback()) moves free pages directly
>    between lists without touching them.
> 3. reserve_highatomic_pageblock()/unreserve_highatomic_pageblock() moves
>    free pages directly between freelists without touching them.
>
> We already place pages at the tail of the freelists when undoing isolation
> via __putback_isolated_page(); let's do it in any case (e.g., if order <=
> pageblock_order) and document the behavior. To simplify, let's move the
> pages to the tail for all move_to_free_list()/move_freepages_block() users.
>
> In 2., the target list is empty, so there should be no change. In 3.,
> we might observe a change; however, highatomic is more concerned about
> allocations succeeding than cache hotness - if we ever realize this
> change degrades a workload, we can special-case this instance and add a
> proper comment.
>
> This change results in all pages onlined via online_pages() being
> placed at the tail of the freelist.
>
> Reviewed-by: Oscar Salvador <osalvador@suse.de>
> Acked-by: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
> Reviewed-by: Wei Yang <richard.weiyang@linux.alibaba.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Mike Rapoport <rppt@kernel.org>
> Cc: Scott Cheloha <cheloha@linux.ibm.com>
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> Signed-off-by: David Hildenbrand <david@redhat.com>

Much simpler!

Acked-by: Michal Hocko <mhocko@suse.com>

Thanks!

> ---
>  mm/page_alloc.c     | 10 +++++++---
>  mm/page_isolation.c |  5 +++++
>  2 files changed, 12 insertions(+), 3 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index df5ff0cd6df1..b187e46cf640 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -901,13 +901,17 @@ static inline void add_to_free_list_tail(struct page *page, struct zone *zone,
>  	area->nr_free++;
>  }
>
> -/* Used for pages which are on another list */
> +/*
> + * Used for pages which are on another list. Move the pages to the tail
> + * of the list - so the moved pages won't immediately be considered for
> + * allocation again (e.g., optimization for memory onlining).
> + */
>  static inline void move_to_free_list(struct page *page, struct zone *zone,
>  				     unsigned int order, int migratetype)
>  {
>  	struct free_area *area = &zone->free_area[order];
>
> -	list_move(&page->lru, &area->free_list[migratetype]);
> +	list_move_tail(&page->lru, &area->free_list[migratetype]);
>  }
>
>  static inline void del_page_from_free_list(struct page *page, struct zone *zone,
> @@ -2340,7 +2344,7 @@ static inline struct page *__rmqueue_cma_fallback(struct zone *zone,
>  #endif
>
>  /*
> - * Move the free pages in a range to the free lists of the requested type.
> + * Move the free pages in a range to the freelist tail of the requested type.
>  * Note that start_page and end_pages are not aligned on a pageblock
>  * boundary. If alignment is required, use move_freepages_block()
>  */
> diff --git a/mm/page_isolation.c b/mm/page_isolation.c
> index abfe26ad59fd..83692b937784 100644
> --- a/mm/page_isolation.c
> +++ b/mm/page_isolation.c
> @@ -106,6 +106,11 @@ static void unset_migratetype_isolate(struct page *page, unsigned migratetype)
>  	 * If we isolate freepage with more than pageblock_order, there
>  	 * should be no freepage in the range, so we could avoid costly
>  	 * pageblock scanning for freepage moving.
> +	 *
> +	 * We didn't actually touch any of the isolated pages, so place them
> +	 * to the tail of the freelist. This is an optimization for memory
> +	 * onlining - just onlined memory won't immediately be considered for
> +	 * allocation.
>  	 */
>  	if (!isolated_page) {
>  		nr_pages = move_freepages_block(zone, page, migratetype, NULL);
> --
> 2.26.2
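For readers skimming the thread, the core of the change is the switch from list_move() to list_move_tail(). Below is a minimal userspace sketch (not kernel code) of why tail placement matters: at the time of this patch, allocation paths such as __rmqueue_smallest() take pages from the head of a freelist, so pages parked at the tail are reused last. The list helpers mirror include/linux/list.h; struct fake_page, dump(), and the page ids are illustrative stand-ins.

```c
#include <stdio.h>
#include <stddef.h>

/* Simplified copies of the kernel's intrusive list primitives. */
struct list_head {
	struct list_head *next, *prev;
};

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

static void INIT_LIST_HEAD(struct list_head *h)
{
	h->next = h;
	h->prev = h;
}

static void __list_add(struct list_head *entry, struct list_head *prev,
		       struct list_head *next)
{
	next->prev = entry;
	entry->next = next;
	entry->prev = prev;
	prev->next = entry;
}

static void list_del(struct list_head *entry)
{
	entry->prev->next = entry->next;
	entry->next->prev = entry->prev;
}

/* list_move(): re-insert at the head - the entry is taken next. */
static void list_move(struct list_head *entry, struct list_head *head)
{
	list_del(entry);
	__list_add(entry, head, head->next);
}

/* list_move_tail(): re-insert at the tail - the entry is taken last. */
static void list_move_tail(struct list_head *entry, struct list_head *head)
{
	list_del(entry);
	__list_add(entry, head->prev, head);
}

/* Illustrative stand-in for struct page; 'lru' links it into a freelist. */
struct fake_page {
	int id;
	struct list_head lru;
};

static void dump(const char *what, struct list_head *freelist)
{
	struct list_head *pos;

	printf("%s:", what);
	for (pos = freelist->next; pos != freelist; pos = pos->next)
		printf(" page%d", container_of(pos, struct fake_page, lru)->id);
	printf("  (head is allocated first)\n");
}

int main(void)
{
	struct fake_page hot = { .id = 1 }, onlined = { .id = 2 };
	struct list_head freelist;

	INIT_LIST_HEAD(&freelist);
	INIT_LIST_HEAD(&hot.lru);
	INIT_LIST_HEAD(&onlined.lru);

	/* A cache-hot page is already on the freelist. */
	list_move(&hot.lru, &freelist);

	/* Old behavior: the moved page jumps ahead of the hot page. */
	list_move(&onlined.lru, &freelist);
	dump("list_move", &freelist);

	/* New behavior: the moved page queues behind the hot page. */
	list_move(&hot.lru, &freelist);	/* put hot back at the head */
	list_move_tail(&onlined.lru, &freelist);
	dump("list_move_tail", &freelist);

	return 0;
}
```

Running it prints `list_move: page2 page1` and then `list_move_tail: page1 page2`: with tail placement, the just-moved page queues behind the cache-hot one. It also illustrates point 2 of the commit message: when the destination list is empty, head and tail insertion yield the same single-element list, which is why steal_suitable_fallback() sees no change.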
On 10/5/20 2:15 PM, David Hildenbrand wrote:
> Whenever we move pages between freelists via move_to_free_list()/
> move_freepages_block(), we don't actually touch the pages:
> 1. Page isolation doesn't actually touch the pages; it simply isolates
>    pageblocks and moves all free pages to the MIGRATE_ISOLATE freelist.
>    When undoing isolation, we move the pages back to the target list.
> 2. Page stealing (steal_suitable_fallback()) moves free pages directly
>    between lists without touching them.
> 3. reserve_highatomic_pageblock()/unreserve_highatomic_pageblock() moves
>    free pages directly between freelists without touching them.
>
> We already place pages at the tail of the freelists when undoing isolation
> via __putback_isolated_page(); let's do it in any case (e.g., if order <=
> pageblock_order) and document the behavior. To simplify, let's move the
> pages to the tail for all move_to_free_list()/move_freepages_block() users.
>
> In 2., the target list is empty, so there should be no change. In 3.,
> we might observe a change; however, highatomic is more concerned about
> allocations succeeding than cache hotness - if we ever realize this
> change degrades a workload, we can special-case this instance and add a
> proper comment.
>
> This change results in all pages onlined via online_pages() being
> placed at the tail of the freelist.
>
> Reviewed-by: Oscar Salvador <osalvador@suse.de>
> Acked-by: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
> Reviewed-by: Wei Yang <richard.weiyang@linux.alibaba.com>

Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
```diff
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index df5ff0cd6df1..b187e46cf640 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -901,13 +901,17 @@ static inline void add_to_free_list_tail(struct page *page, struct zone *zone,
 	area->nr_free++;
 }
 
-/* Used for pages which are on another list */
+/*
+ * Used for pages which are on another list. Move the pages to the tail
+ * of the list - so the moved pages won't immediately be considered for
+ * allocation again (e.g., optimization for memory onlining).
+ */
 static inline void move_to_free_list(struct page *page, struct zone *zone,
 				     unsigned int order, int migratetype)
 {
 	struct free_area *area = &zone->free_area[order];
 
-	list_move(&page->lru, &area->free_list[migratetype]);
+	list_move_tail(&page->lru, &area->free_list[migratetype]);
 }
 
 static inline void del_page_from_free_list(struct page *page, struct zone *zone,
@@ -2340,7 +2344,7 @@ static inline struct page *__rmqueue_cma_fallback(struct zone *zone,
 #endif
 
 /*
- * Move the free pages in a range to the free lists of the requested type.
+ * Move the free pages in a range to the freelist tail of the requested type.
  * Note that start_page and end_pages are not aligned on a pageblock
  * boundary. If alignment is required, use move_freepages_block()
  */
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index abfe26ad59fd..83692b937784 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -106,6 +106,11 @@ static void unset_migratetype_isolate(struct page *page, unsigned migratetype)
 	 * If we isolate freepage with more than pageblock_order, there
 	 * should be no freepage in the range, so we could avoid costly
 	 * pageblock scanning for freepage moving.
+	 *
+	 * We didn't actually touch any of the isolated pages, so place them
+	 * to the tail of the freelist. This is an optimization for memory
+	 * onlining - just onlined memory won't immediately be considered for
+	 * allocation.
 	 */
 	if (!isolated_page) {
 		nr_pages = move_freepages_block(zone, page, migratetype, NULL);
```
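To see the patch's stated effect end to end, the trigger is memory onlining: writing "online" to a memory block's sysfs state file eventually invokes online_pages(), whose freed pages now land at the freelist tail. Below is a minimal sketch, assuming a hotpluggable block memory42 exists (the block id is a placeholder), root privileges, and a kernel built with CONFIG_MEMORY_HOTPLUG.

```c
#include <stdio.h>
#include <string.h>
#include <errno.h>

/*
 * Online a memory block via the sysfs memory-hotplug interface. The
 * write ends up in online_pages(), which (after this patch) places the
 * newly onlined pages at the tail of the freelists. "memory42" is a
 * placeholder - list /sys/devices/system/memory/ to find real block ids.
 */
int main(void)
{
	const char *path = "/sys/devices/system/memory/memory42/state";
	FILE *f = fopen(path, "w");

	if (!f) {
		fprintf(stderr, "open %s: %s\n", path, strerror(errno));
		return 1;
	}
	if (fputs("online", f) == EOF) {
		fprintf(stderr, "write: %s\n", strerror(errno));
		fclose(f);
		return 1;
	}
	if (fclose(f) == EOF) {
		fprintf(stderr, "close: %s\n", strerror(errno));
		return 1;
	}
	printf("memory block onlined\n");
	return 0;
}
```

Equivalently, `echo online > /sys/devices/system/memory/memory42/state` from a root shell exercises the same kernel path.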