| Message ID | 20181107101830.17405-6-mhocko@kernel.org (mailing list archive) |
|---|---|
| State | New, archived |
| Series | mm, memory_hotplug: improve memory offlining failures debugging |
On 11/07/2018 03:48 PM, Michal Hocko wrote:
> From: Michal Hocko <mhocko@suse.com>
>
> There is only very limited information printed when memory offlining
> fails:
> [ 1984.506184] rac1 kernel: memory offlining [mem 0x82600000000-0x8267fffffff] failed due to signal backoff
>
> This tells us that the failure was triggered by userspace intervention,
> but it doesn't tell us much more about the underlying reason. It might
> be that the page migration fails repeatedly and the userspace timeout
> expires and sends a signal, or it might be that some of the earlier
> steps (isolation, memory notifier) take too long.
>
> If the migration fails then it would be really helpful to see which
> page failed and what its state was. The same applies to the isolation
> phase. If we fail to isolate a page from the allocator then knowing the
> state of the page would be helpful as well.
>
> Dump the state of the page that fails to get isolated or migrated. This
> will tell us more about the failure and what to focus on during
> debugging.
>
> Signed-off-by: Michal Hocko <mhocko@suse.com>
> ---
>  mm/memory_hotplug.c | 12 ++++++++----
>  mm/page_alloc.c     |  1 +
>  2 files changed, 9 insertions(+), 4 deletions(-)
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 1badac89c58e..bf214beccda3 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1388,10 +1388,8 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
>  						page_is_file_cache(page));
>
>  		} else {
> -#ifdef CONFIG_DEBUG_VM
> -			pr_alert("failed to isolate pfn %lx\n", pfn);
> +			pr_warn("failed to isolate pfn %lx\n", pfn);
>  			dump_page(page, "isolation failed");
> -#endif
>  			put_page(page);
>  			/* Because we don't have big zone->lock. we should
>  			   check this again here. */
> @@ -1411,8 +1409,14 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
>  		/* Allocate a new page from the nearest neighbor node */
>  		ret = migrate_pages(&source, new_node_page, NULL, 0,
>  					MIGRATE_SYNC, MR_MEMORY_HOTPLUG);
> -		if (ret)
> +		if (ret) {
> +			list_for_each_entry(page, &source, lru) {
> +				pr_warn("migrating pfn %lx failed ",
> +				       page_to_pfn(page), ret);

Seems like pr_warn() needs a %d in here to print 'ret'. Though dumping the
return code from migrate_pages() makes sense, I wonder whether it is
required for each and every page which failed to migrate here, or whether
just one instance is enough.

> +				dump_page(page, NULL);
> +			}

s/NULL/failed to migrate/ for dump_page().

>  			putback_movable_pages(&source);
> +		}
>  	}
>  out:
>  	return ret;
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index a919ba5cb3c8..23267767bf98 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -7845,6 +7845,7 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
>  	return false;
>  unmovable:
>  	WARN_ON_ONCE(zone_idx(zone) == ZONE_MOVABLE);
> +	dump_page(pfn_to_page(pfn+iter), "has_unmovable_pages");

s/has_unmovable_pages/is unmovable/

If we really care about the function name, then dump_page() should be
followed by dump_stack(), as is done in some other instances.

>  	return true;

This will be dumped from the HugeTLB and CMA allocation paths as well,
through alloc_contig_range(). But it should be okay, as those occurrences
should be rare and dumping the page state there will also help.
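As a side note on the per-page versus one-shot question raised above, a minimal
sketch of a "print the return code once" variant could look like the fragment
below. It is illustrative only (not the posted patch) and simply reuses the
identifiers from the quoted hunk (ret, source, page):

	/*
	 * Sketch only: report the migrate_pages() error code a single time,
	 * then dump each page that could not be migrated. Not the version
	 * that was posted or merged.
	 */
	if (ret) {
		pr_warn("memory offlining: migrate_pages() failed: %d\n", ret);
		list_for_each_entry(page, &source, lru)
			dump_page(page, "migration failure");
		putback_movable_pages(&source);
	}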
On Thu 08-11-18 12:46:47, Anshuman Khandual wrote:
> On 11/07/2018 03:48 PM, Michal Hocko wrote:
[...]
> > @@ -1411,8 +1409,14 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
> >  		/* Allocate a new page from the nearest neighbor node */
> >  		ret = migrate_pages(&source, new_node_page, NULL, 0,
> >  					MIGRATE_SYNC, MR_MEMORY_HOTPLUG);
> > -		if (ret)
> > +		if (ret) {
> > +			list_for_each_entry(page, &source, lru) {
> > +				pr_warn("migrating pfn %lx failed ",
> > +				       page_to_pfn(page), ret);
>
> Seems like pr_warn() needs a %d in here to print 'ret'.

Dohh. Rebase hiccup. You are right, the ret:%d got lost on the way.

> Though dumping the return code from migrate_pages() makes sense, I wonder
> whether it is required for each and every page which failed to migrate
> here, or whether just one instance is enough.

Does it matter enough to special-case one printk?

> > +				dump_page(page, NULL);
> > +			}
>
> s/NULL/failed to migrate/ for dump_page().

Yes, makes sense.

> >  			putback_movable_pages(&source);
> > +		}
> >  	}
> >  out:
> >  	return ret;
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index a919ba5cb3c8..23267767bf98 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -7845,6 +7845,7 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
> >  	return false;
> >  unmovable:
> >  	WARN_ON_ONCE(zone_idx(zone) == ZONE_MOVABLE);
> > +	dump_page(pfn_to_page(pfn+iter), "has_unmovable_pages");
>
> s/has_unmovable_pages/is unmovable/

OK

> If we really care about the function name, then dump_page() should be
> followed by dump_stack(), as is done in some other instances.
>
> >  	return true;
>
> This will be dumped from the HugeTLB and CMA allocation paths as well,
> through alloc_contig_range(). But it should be okay, as those occurrences
> should be rare and dumping the page state there will also help.

yes

Thanks, and here is the incremental fix:

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index bf214beccda3..820397e18e59 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1411,9 +1411,9 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 					MIGRATE_SYNC, MR_MEMORY_HOTPLUG);
 		if (ret) {
 			list_for_each_entry(page, &source, lru) {
-				pr_warn("migrating pfn %lx failed ",
+				pr_warn("migrating pfn %lx failed ret:%d ",
 				       page_to_pfn(page), ret);
-				dump_page(page, NULL);
+				dump_page(page, "migration failure");
 			}
 			putback_movable_pages(&source);
 		}
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 23267767bf98..ec2c7916dc2d 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7845,7 +7845,7 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
 	return false;
 unmovable:
 	WARN_ON_ONCE(zone_idx(zone) == ZONE_MOVABLE);
-	dump_page(pfn_to_page(pfn+iter), "has_unmovable_pages");
+	dump_page(pfn_to_page(pfn+iter), "unmovable page");
 	return true;
 }
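The dump_stack() pairing that the review mentions (and that the incremental fix
above does not adopt) would look roughly like the sketch below; it is only an
illustration of the rejected alternative, using the same identifiers as the
quoted has_unmovable_pages() hunk:

	/*
	 * Sketch of the alternative from the review: keep a generic
	 * dump_page() message and let dump_stack() identify the caller
	 * (memory offlining, CMA, alloc_contig_range(), ...). Not applied.
	 */
unmovable:
	WARN_ON_ONCE(zone_idx(zone) == ZONE_MOVABLE);
	dump_page(pfn_to_page(pfn + iter), "unmovable page");
	dump_stack();
	return true;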
On 11/08/2018 01:42 PM, Michal Hocko wrote:
> On Thu 08-11-18 12:46:47, Anshuman Khandual wrote:
>> On 11/07/2018 03:48 PM, Michal Hocko wrote:
> [...]
>>> @@ -1411,8 +1409,14 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
>>>  		/* Allocate a new page from the nearest neighbor node */
>>>  		ret = migrate_pages(&source, new_node_page, NULL, 0,
>>>  					MIGRATE_SYNC, MR_MEMORY_HOTPLUG);
>>> -		if (ret)
>>> +		if (ret) {
>>> +			list_for_each_entry(page, &source, lru) {
>>> +				pr_warn("migrating pfn %lx failed ",
>>> +				       page_to_pfn(page), ret);
>>
>> Seems like pr_warn() needs a %d in here to print 'ret'.
>
> Dohh. Rebase hiccup. You are right, the ret:%d got lost on the way.
>
>> Though dumping the return code from migrate_pages() makes sense, I wonder
>> whether it is required for each and every page which failed to migrate
>> here, or whether just one instance is enough.
>
> Does it matter enough to special-case one printk?

I was just imagining how a pile of prints would look when multiple pages
fail to migrate, probably all for the same reason. But I guess it's okay.

>
>>> +				dump_page(page, NULL);
>>> +			}
>>
>> s/NULL/failed to migrate/ for dump_page().
>
> Yes, makes sense.
>
>>>  			putback_movable_pages(&source);
>>> +		}
>>>  	}
>>>  out:
>>>  	return ret;
>>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>>> index a919ba5cb3c8..23267767bf98 100644
>>> --- a/mm/page_alloc.c
>>> +++ b/mm/page_alloc.c
>>> @@ -7845,6 +7845,7 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
>>>  	return false;
>>>  unmovable:
>>>  	WARN_ON_ONCE(zone_idx(zone) == ZONE_MOVABLE);
>>> +	dump_page(pfn_to_page(pfn+iter), "has_unmovable_pages");
>>
>> s/has_unmovable_pages/is unmovable/
>
> OK
>
>> If we really care about the function name, then dump_page() should be
>> followed by dump_stack(), as is done in some other instances.
>>
>>>  	return true;
>>
>> This will be dumped from the HugeTLB and CMA allocation paths as well,
>> through alloc_contig_range(). But it should be okay, as those occurrences
>> should be rare and dumping the page state there will also help.
>
> yes
>
> Thanks, and here is the incremental fix:
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index bf214beccda3..820397e18e59 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1411,9 +1411,9 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
>  					MIGRATE_SYNC, MR_MEMORY_HOTPLUG);
>  		if (ret) {
>  			list_for_each_entry(page, &source, lru) {
> -				pr_warn("migrating pfn %lx failed ",
> +				pr_warn("migrating pfn %lx failed ret:%d ",
>  				       page_to_pfn(page), ret);
> -				dump_page(page, NULL);
> +				dump_page(page, "migration failure");
>  			}
>  			putback_movable_pages(&source);
>  		}
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 23267767bf98..ec2c7916dc2d 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -7845,7 +7845,7 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
>  	return false;
>  unmovable:
>  	WARN_ON_ONCE(zone_idx(zone) == ZONE_MOVABLE);
> -	dump_page(pfn_to_page(pfn+iter), "has_unmovable_pages");
> +	dump_page(pfn_to_page(pfn+iter), "unmovable page");
>  	return true;
>  }

It looks good.
Andrew, could you pick up this one as well, please? Let me know if you
prefer me to send the whole pile with all the fixes again.

> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index bf214beccda3..820397e18e59 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1411,9 +1411,9 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
>  					MIGRATE_SYNC, MR_MEMORY_HOTPLUG);
>  		if (ret) {
>  			list_for_each_entry(page, &source, lru) {
> -				pr_warn("migrating pfn %lx failed ",
> +				pr_warn("migrating pfn %lx failed ret:%d ",
>  				       page_to_pfn(page), ret);
> -				dump_page(page, NULL);
> +				dump_page(page, "migration failure");
>  			}
>  			putback_movable_pages(&source);
>  		}
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 23267767bf98..ec2c7916dc2d 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -7845,7 +7845,7 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
>  	return false;
>  unmovable:
>  	WARN_ON_ONCE(zone_idx(zone) == ZONE_MOVABLE);
> -	dump_page(pfn_to_page(pfn+iter), "has_unmovable_pages");
> +	dump_page(pfn_to_page(pfn+iter), "unmovable page");
>  	return true;
>  }
>
> --
> Michal Hocko
> SUSE Labs
On Wed, 7 Nov 2018 11:18:30 +0100 Michal Hocko <mhocko@kernel.org> wrote:

> From: Michal Hocko <mhocko@suse.com>
>
> There is only very limited information printed when memory offlining
> fails:
> [ 1984.506184] rac1 kernel: memory offlining [mem 0x82600000000-0x8267fffffff] failed due to signal backoff
>
> This tells us that the failure was triggered by userspace intervention,
> but it doesn't tell us much more about the underlying reason. It might
> be that the page migration fails repeatedly and the userspace timeout
> expires and sends a signal, or it might be that some of the earlier
> steps (isolation, memory notifier) take too long.
>
> If the migration fails then it would be really helpful to see which
> page failed and what its state was. The same applies to the isolation
> phase. If we fail to isolate a page from the allocator then knowing the
> state of the page would be helpful as well.
>
> Dump the state of the page that fails to get isolated or migrated. This
> will tell us more about the failure and what to focus on during
> debugging.
>
> ...
>
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1388,10 +1388,8 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
>  						page_is_file_cache(page));
>
>  		} else {
> -#ifdef CONFIG_DEBUG_VM
> -			pr_alert("failed to isolate pfn %lx\n", pfn);
> +			pr_warn("failed to isolate pfn %lx\n", pfn);
>  			dump_page(page, "isolation failed");
> -#endif
>  			put_page(page);
>  			/* Because we don't have big zone->lock. we should
>  			   check this again here. */
> @@ -1411,8 +1409,14 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
>  		/* Allocate a new page from the nearest neighbor node */
>  		ret = migrate_pages(&source, new_node_page, NULL, 0,
>  					MIGRATE_SYNC, MR_MEMORY_HOTPLUG);
> -		if (ret)
> +		if (ret) {
> +			list_for_each_entry(page, &source, lru) {
> +				pr_warn("migrating pfn %lx failed ",
> +				       page_to_pfn(page), ret);
> +				dump_page(page, NULL);
> +			}

./include/linux/kern_levels.h:5:18: warning: too many arguments for format [-Wformat-extra-args]
 #define KERN_SOH "\001" /* ASCII Start Of Header */
                  ^
./include/linux/kern_levels.h:12:22: note: in expansion of macro ‘KERN_SOH’
 #define KERN_WARNING KERN_SOH "4" /* warning conditions */
                      ^~~~~~~~
./include/linux/printk.h:310:9: note: in expansion of macro ‘KERN_WARNING’
  printk(KERN_WARNING pr_fmt(fmt), ##__VA_ARGS__)
         ^~~~~~~~~~~~
./include/linux/printk.h:311:17: note: in expansion of macro ‘pr_warning’
 #define pr_warn pr_warning
                 ^~~~~~~~~~
mm/memory_hotplug.c:1414:5: note: in expansion of macro ‘pr_warn’
     pr_warn("migrating pfn %lx failed ",
     ^~~~~~~

--- a/mm/memory_hotplug.c~mm-memory_hotplug-be-more-verbose-for-memory-offline-failures-fix
+++ a/mm/memory_hotplug.c
@@ -1411,7 +1411,7 @@ do_migrate_range(unsigned long start_pfn
 					MIGRATE_SYNC, MR_MEMORY_HOTPLUG);
 		if (ret) {
 			list_for_each_entry(page, &source, lru) {
-				pr_warn("migrating pfn %lx failed ",
+				pr_warn("migrating pfn %lx failed: %d",
 				       page_to_pfn(page), ret);
 				dump_page(page, NULL);
 			}
On Thu 15-11-18 16:07:16, Andrew Morton wrote:
> On Wed, 7 Nov 2018 11:18:30 +0100 Michal Hocko <mhocko@kernel.org> wrote:
>
> > From: Michal Hocko <mhocko@suse.com>
> >
> > There is only very limited information printed when memory offlining
> > fails:
> > [ 1984.506184] rac1 kernel: memory offlining [mem 0x82600000000-0x8267fffffff] failed due to signal backoff
> >
> > This tells us that the failure was triggered by userspace intervention,
> > but it doesn't tell us much more about the underlying reason. It might
> > be that the page migration fails repeatedly and the userspace timeout
> > expires and sends a signal, or it might be that some of the earlier
> > steps (isolation, memory notifier) take too long.
> >
> > If the migration fails then it would be really helpful to see which
> > page failed and what its state was. The same applies to the isolation
> > phase. If we fail to isolate a page from the allocator then knowing the
> > state of the page would be helpful as well.
> >
> > Dump the state of the page that fails to get isolated or migrated. This
> > will tell us more about the failure and what to focus on during
> > debugging.
> >
> > ...
> >
> > --- a/mm/memory_hotplug.c
> > +++ b/mm/memory_hotplug.c
> > @@ -1388,10 +1388,8 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
> >  						page_is_file_cache(page));
> >
> >  		} else {
> > -#ifdef CONFIG_DEBUG_VM
> > -			pr_alert("failed to isolate pfn %lx\n", pfn);
> > +			pr_warn("failed to isolate pfn %lx\n", pfn);
> >  			dump_page(page, "isolation failed");
> > -#endif
> >  			put_page(page);
> >  			/* Because we don't have big zone->lock. we should
> >  			   check this again here. */
> > @@ -1411,8 +1409,14 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
> >  		/* Allocate a new page from the nearest neighbor node */
> >  		ret = migrate_pages(&source, new_node_page, NULL, 0,
> >  					MIGRATE_SYNC, MR_MEMORY_HOTPLUG);
> > -		if (ret)
> > +		if (ret) {
> > +			list_for_each_entry(page, &source, lru) {
> > +				pr_warn("migrating pfn %lx failed ",
> > +				       page_to_pfn(page), ret);
> > +				dump_page(page, NULL);
> > +			}
>
> ./include/linux/kern_levels.h:5:18: warning: too many arguments for format [-Wformat-extra-args]
>  #define KERN_SOH "\001" /* ASCII Start Of Header */
>                   ^
> ./include/linux/kern_levels.h:12:22: note: in expansion of macro ‘KERN_SOH’
>  #define KERN_WARNING KERN_SOH "4" /* warning conditions */
>                       ^~~~~~~~
> ./include/linux/printk.h:310:9: note: in expansion of macro ‘KERN_WARNING’
>   printk(KERN_WARNING pr_fmt(fmt), ##__VA_ARGS__)
>          ^~~~~~~~~~~~
> ./include/linux/printk.h:311:17: note: in expansion of macro ‘pr_warning’
>  #define pr_warn pr_warning
>                  ^~~~~~~~~~
> mm/memory_hotplug.c:1414:5: note: in expansion of macro ‘pr_warn’
>      pr_warn("migrating pfn %lx failed ",
>      ^~~~~~~

yeah, 0day already complained and I've posted a follow-up fix:
http://lkml.kernel.org/r/20181108081231.GN27423@dhcp22.suse.cz

Let me post a version 2 with all the fixups. Thanks!

> --- a/mm/memory_hotplug.c~mm-memory_hotplug-be-more-verbose-for-memory-offline-failures-fix
> +++ a/mm/memory_hotplug.c
> @@ -1411,7 +1411,7 @@ do_migrate_range(unsigned long start_pfn
>  					MIGRATE_SYNC, MR_MEMORY_HOTPLUG);
>  		if (ret) {
>  			list_for_each_entry(page, &source, lru) {
> -				pr_warn("migrating pfn %lx failed ",
> +				pr_warn("migrating pfn %lx failed: %d",
>  				       page_to_pfn(page), ret);
>  				dump_page(page, NULL);
>  			}
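For readers who have not run into this warning class before, the same diagnostic
can be reproduced in a few lines of ordinary userspace C (an illustration only,
not kernel code). Built with gcc -Wall, the first printf() triggers
-Wformat-extra-args because ret has no matching conversion specifier, while the
second compiles cleanly:

#include <stdio.h>

int main(void)
{
	unsigned long pfn = 0x82600000;	/* example pfn, value is arbitrary */
	int ret = -16;			/* e.g. an -EBUSY style error code */

	/* Mirrors the broken pr_warn(): extra argument, no %d -> warning. */
	printf("migrating pfn %lx failed ", pfn, ret);

	/* Mirrors the fixed version: every argument has a specifier. */
	printf("migrating pfn %lx failed: %d\n", pfn, ret);

	return 0;
}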
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 1badac89c58e..bf214beccda3 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1388,10 +1388,8 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 						page_is_file_cache(page));
 
 		} else {
-#ifdef CONFIG_DEBUG_VM
-			pr_alert("failed to isolate pfn %lx\n", pfn);
+			pr_warn("failed to isolate pfn %lx\n", pfn);
 			dump_page(page, "isolation failed");
-#endif
 			put_page(page);
 			/* Because we don't have big zone->lock. we should
 			   check this again here. */
@@ -1411,8 +1409,14 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 		/* Allocate a new page from the nearest neighbor node */
 		ret = migrate_pages(&source, new_node_page, NULL, 0,
 					MIGRATE_SYNC, MR_MEMORY_HOTPLUG);
-		if (ret)
+		if (ret) {
+			list_for_each_entry(page, &source, lru) {
+				pr_warn("migrating pfn %lx failed ",
+				       page_to_pfn(page), ret);
+				dump_page(page, NULL);
+			}
 			putback_movable_pages(&source);
+		}
 	}
 out:
 	return ret;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a919ba5cb3c8..23267767bf98 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7845,6 +7845,7 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
 	return false;
 unmovable:
 	WARN_ON_ONCE(zone_idx(zone) == ZONE_MOVABLE);
+	dump_page(pfn_to_page(pfn+iter), "has_unmovable_pages");
 	return true;
 }