Message ID | 20220329010901.1654-2-richard.weiyang@gmail.com (mailing list archive)
---|---
State | New
Series | [v2,1/2] mm/vmscan: reclaim only affects managed_zones
Wei Yang <richard.weiyang@gmail.com> writes:

> wakeup_kswapd() only wakes up kswapd when the zone is managed.
>
> The two callers of wakeup_kswapd() work from a node perspective:
>
> * wake_all_kswapds
> * numamigrate_isolate_page
>
> If we pick up a !managed zone, this is not what we expect.
>
> This patch makes sure we pick up a managed zone for wakeup_kswapd(). It
> also uses managed_zone in migrate_balanced_pgdat() to get the proper
> zone.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Cc: Miaohe Lin <linmiaohe@huawei.com>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: "Huang, Ying" <ying.huang@intel.com>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: Oscar Salvador <osalvador@suse.de>
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

LGTM, Thanks!

Reviewed-by: "Huang, Ying" <ying.huang@intel.com>

> ---
> v2: adjust the usage in migrate_balanced_pgdat()
>
> ---
>  mm/migrate.c    | 6 +++---
>  mm/page_alloc.c | 2 ++
>  2 files changed, 5 insertions(+), 3 deletions(-)
>
> [...]
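(For context on the two helpers being swapped: a zone can be populated, i.e. have present pages, while none of those pages are managed by the buddy allocator, e.g. when all of its memory was reserved at boot. The definitions below are a reference sketch of include/linux/mmzone.h as of roughly this kernel version; they are not part of the patch.)

/* Reference sketch from include/linux/mmzone.h (circa v5.18), not part of this patch. */
static inline unsigned long zone_managed_pages(struct zone *zone)
{
	return (unsigned long)atomic_long_read(&zone->managed_pages);
}

/* True when the zone has pages managed by the buddy allocator. */
static inline bool managed_zone(struct zone *zone)
{
	return zone_managed_pages(zone);
}

/* True when the zone has any present pages, managed or not. */
static inline bool populated_zone(struct zone *zone)
{
	return !!zone->present_pages;
}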
On 2022/3/29 9:09, Wei Yang wrote:
> wakeup_kswapd() only wakes up kswapd when the zone is managed.
>
> The two callers of wakeup_kswapd() work from a node perspective:
>
> * wake_all_kswapds
> * numamigrate_isolate_page
>
> If we pick up a !managed zone, this is not what we expect.
>
> This patch makes sure we pick up a managed zone for wakeup_kswapd(). It
> also uses managed_zone in migrate_balanced_pgdat() to get the proper
> zone.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>

Looks good to me. Thanks!

Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>

> Cc: Miaohe Lin <linmiaohe@huawei.com>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: "Huang, Ying" <ying.huang@intel.com>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: Oscar Salvador <osalvador@suse.de>
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
>
> ---
> v2: adjust the usage in migrate_balanced_pgdat()
>
> ---
>  mm/migrate.c    | 6 +++---
>  mm/page_alloc.c | 2 ++
>  2 files changed, 5 insertions(+), 3 deletions(-)
>
> [...]
On 29.03.22 03:09, Wei Yang wrote:
> wakeup_kswapd() only wakes up kswapd when the zone is managed.
>
> The two callers of wakeup_kswapd() work from a node perspective:
>
> * wake_all_kswapds
> * numamigrate_isolate_page
>
> If we pick up a !managed zone, this is not what we expect.
>
> This patch makes sure we pick up a managed zone for wakeup_kswapd(). It
> also uses managed_zone in migrate_balanced_pgdat() to get the proper
> zone.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Cc: Miaohe Lin <linmiaohe@huawei.com>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: "Huang, Ying" <ying.huang@intel.com>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: Oscar Salvador <osalvador@suse.de>
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

^ I'm not so sure about that SOB; actually, Andrew should add that. But
maybe there is a good reason for it that I'm not aware of.

Reviewed-by: David Hildenbrand <david@redhat.com>
On Wed, Mar 30, 2022 at 09:39:42AM +0200, David Hildenbrand wrote:
> On 29.03.22 03:09, Wei Yang wrote:
>> [...]
>>
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> Cc: Miaohe Lin <linmiaohe@huawei.com>
>> Cc: David Hildenbrand <david@redhat.com>
>> Cc: "Huang, Ying" <ying.huang@intel.com>
>> Cc: Mel Gorman <mgorman@techsingularity.net>
>> Cc: Oscar Salvador <osalvador@suse.de>
>> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
>
> ^ I'm not so sure about that SOB; actually, Andrew should add that. But
> maybe there is a good reason for it that I'm not aware of.

I see Andrew has added this for v1. Maybe I should remove it, since v2
has some minor adjustments compared to v1. :-)

> Reviewed-by: David Hildenbrand <david@redhat.com>
>
> --
> Thanks,
>
> David / dhildenb
diff --git a/mm/migrate.c b/mm/migrate.c
index 3d60823afd2d..5adc55b5347c 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1971,7 +1971,7 @@ SYSCALL_DEFINE6(move_pages, pid_t, pid, unsigned long, nr_pages,
 #ifdef CONFIG_NUMA_BALANCING
 /*
  * Returns true if this is a safe migration target node for misplaced NUMA
- * pages. Currently it only checks the watermarks which crude
+ * pages. Currently it only checks the watermarks which is crude.
  */
 static bool migrate_balanced_pgdat(struct pglist_data *pgdat,
 				   unsigned long nr_migrate_pages)
@@ -1981,7 +1981,7 @@ static bool migrate_balanced_pgdat(struct pglist_data *pgdat,
 	for (z = pgdat->nr_zones - 1; z >= 0; z--) {
 		struct zone *zone = pgdat->node_zones + z;
 
-		if (!populated_zone(zone))
+		if (!managed_zone(zone))
 			continue;
 
 		/* Avoid waking kswapd by allocating pages_to_migrate pages. */
@@ -2046,7 +2046,7 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
 	if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING))
 		return 0;
 	for (z = pgdat->nr_zones - 1; z >= 0; z--) {
-		if (populated_zone(pgdat->node_zones + z))
+		if (managed_zone(pgdat->node_zones + z))
 			break;
 	}
 	wakeup_kswapd(pgdat->node_zones + z, 0, order, ZONE_MOVABLE);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4c0c4ef94ba0..6656c2d06e01 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4674,6 +4674,8 @@ static void wake_all_kswapds(unsigned int order, gfp_t gfp_mask,
 
 	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, highest_zoneidx,
 					ac->nodemask) {
+		if (!managed_zone(zone))
+			continue;
 		if (last_pgdat != zone->zone_pgdat)
 			wakeup_kswapd(zone, gfp_mask, order, highest_zoneidx);
 		last_pgdat = zone->zone_pgdat;
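(For context on why calling wakeup_kswapd() with a !managed zone is a silent no-op, the behavior the commit message leads with: the function bails out early for such zones. Below is a reference sketch of the guard in mm/vmscan.c as of roughly this kernel version, with the rest of the function body elided.)

void wakeup_kswapd(struct zone *zone, gfp_t gfp_flags, int order,
		   enum zone_type highest_zoneidx)
{
	pg_data_t *pgdat;

	/* A wakeup request against a zone with no managed pages is dropped. */
	if (!managed_zone(zone))
		return;

	/* ... watermark checks and the actual kswapd wakeup follow ... */
}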