Message ID | 20181116083020.20260-6-mhocko@kernel.org (mailing list archive)
---|---
State | New, archived
Series | mm, memory_hotplug: improve memory offlining failures debugging
On Fri, 2018-11-16 at 09:30 +0100, Michal Hocko wrote:
> From: Michal Hocko <mhocko@suse.com>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index a919ba5cb3c8..ec2c7916dc2d 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -7845,6 +7845,7 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
>  		return false;
>  unmovable:
>  	WARN_ON_ONCE(zone_idx(zone) == ZONE_MOVABLE);
> +	dump_page(pfn_to_page(pfn+iter), "unmovable page");

Would it not be enough to just do:

dump_page(page, "unmovable page");

Unless I am missing something, page should already have the right pfn?

<---
unsigned long check = pfn + iter;
page = pfn_to_page(check);
--->

The rest looks good to me.

Reviewed-by: Oscar Salvador <osalvador@suse.de>
On Fri 16-11-18 11:47:01, osalvador wrote:
> On Fri, 2018-11-16 at 09:30 +0100, Michal Hocko wrote:
> > From: Michal Hocko <mhocko@suse.com>
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index a919ba5cb3c8..ec2c7916dc2d 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -7845,6 +7845,7 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
> >  		return false;
> >  unmovable:
> >  	WARN_ON_ONCE(zone_idx(zone) == ZONE_MOVABLE);
> > +	dump_page(pfn_to_page(pfn+iter), "unmovable page");
>
> Would it not be enough to just do:
>
> dump_page(page, "unmovable page");
>
> Unless I am missing something, page should already have the
> right pfn?

What if pfn_valid_within fails? You could have a pointer to the
previous page.

> <---
> unsigned long check = pfn + iter;
> page = pfn_to_page(check);
> --->
>
> The rest looks good to me.
>
> Reviewed-by: Oscar Salvador <osalvador@suse.de>

Thanks!
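Michal's point can be seen in a small standalone sketch (userspace C, not kernel code; `pfn_valid_within` here is a stand-in that treats every third pfn as a hole): in the `has_unmovable_pages()` scan, `page` is only refreshed for valid pfns, so when the function bails out right after skipping an invalid pfn, `page` may still refer to an earlier pfn, while `pfn + iter` always names the offending one.

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for pfn_valid_within(): pretend every third pfn is a hole. */
static bool pfn_valid_within(unsigned long pfn)
{
    return pfn % 3 != 0;
}

/*
 * Mimics the update pattern in has_unmovable_pages(): `page` (tracked
 * here only by the pfn it would map to) is reassigned only when the pfn
 * is valid.  Returns the pfn the stale `page` pointer corresponds to
 * after scanning iterations 0..stop_iter.
 */
static unsigned long stale_page_pfn(unsigned long pfn, unsigned long stop_iter)
{
    unsigned long page_pfn = pfn;       /* page = pfn_to_page(pfn) */
    for (unsigned long iter = 0; iter <= stop_iter; iter++) {
        if (!pfn_valid_within(pfn + iter))
            continue;                   /* page is NOT updated here */
        page_pfn = pfn + iter;          /* page = pfn_to_page(pfn + iter) */
    }
    return page_pfn;
}
```

With a hole at pfn 3, a scan of pfns 1..3 leaves `page` pointing at pfn 2 even though the loop counter names pfn 3 — which is why the patch dumps `pfn_to_page(pfn + iter)` rather than `page`.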
On 11/16/2018 02:00 PM, Michal Hocko wrote:
> From: Michal Hocko <mhocko@suse.com>
>
> There is only very limited information printed when the memory offlining
> fails:
> [ 1984.506184] rac1 kernel: memory offlining [mem 0x82600000000-0x8267fffffff] failed due to signal backoff
>
> This tells us that the failure is triggered by the userspace
> intervention but it doesn't tell us much more about the underlying
> reason. It might be that the page migration fails repeatedly and the
> userspace timeout expires and sends a signal, or it might be that one
> of the earlier steps (isolation, memory notifier) takes too long.
>
> If the migration fails then it would be really helpful to see which
> page failed and its state. The same applies to the isolation phase. If we
> fail to isolate a page from the allocator then knowing the state of the
> page would be helpful as well.
>
> Dump the page state that fails to get isolated or migrated. This will
> tell us more about the failure and what to focus on during debugging.
>
> Signed-off-by: Michal Hocko <mhocko@suse.com>
> ---
>  mm/memory_hotplug.c | 12 ++++++++----
>  mm/page_alloc.c     |  1 +
>  2 files changed, 9 insertions(+), 4 deletions(-)
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 88d50e74e3fe..c82193db4be6 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1388,10 +1388,8 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
>  					page_is_file_cache(page));
>
>  		} else {
> -#ifdef CONFIG_DEBUG_VM
> -			pr_alert("failed to isolate pfn %lx\n", pfn);
> +			pr_warn("failed to isolate pfn %lx\n", pfn);
>  			dump_page(page, "isolation failed");
> -#endif
>  			put_page(page);
>  			/* Because we don't have big zone->lock. we should
>  			   check this again here. */
> @@ -1411,8 +1409,14 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
>  		/* Allocate a new page from the nearest neighbor node */
>  		ret = migrate_pages(&source, new_node_page, NULL, 0,
>  					MIGRATE_SYNC, MR_MEMORY_HOTPLUG);
> -		if (ret)
> +		if (ret) {
> +			list_for_each_entry(page, &source, lru) {
> +				pr_warn("migrating pfn %lx failed ret:%d ",
> +					page_to_pfn(page), ret);
> +				dump_page(page, "migration failure");
> +			}
>  			putback_movable_pages(&source);
> +		}
>  	}
> out:
>  	return ret;
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index a919ba5cb3c8..ec2c7916dc2d 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -7845,6 +7845,7 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
>  		return false;
>  unmovable:
>  	WARN_ON_ONCE(zone_idx(zone) == ZONE_MOVABLE);
> +	dump_page(pfn_to_page(pfn+iter), "unmovable page");
>  	return true;
>  }

This seems to have fixed the previous build problem because of the
migrate_pages() return code. Otherwise looks good.

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
On Fri, 2018-11-16 at 12:22 +0100, Michal Hocko wrote:
> On Fri 16-11-18 11:47:01, osalvador wrote:
> > On Fri, 2018-11-16 at 09:30 +0100, Michal Hocko wrote:
> > > From: Michal Hocko <mhocko@suse.com>
> > > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > > index a919ba5cb3c8..ec2c7916dc2d 100644
> > > --- a/mm/page_alloc.c
> > > +++ b/mm/page_alloc.c
> > > @@ -7845,6 +7845,7 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
> > >  		return false;
> > >  unmovable:
> > >  	WARN_ON_ONCE(zone_idx(zone) == ZONE_MOVABLE);
> > > +	dump_page(pfn_to_page(pfn+iter), "unmovable page");
> >
> > Would it not be enough to just do:
> >
> > dump_page(page, "unmovable page");
> >
> > Unless I am missing something, page should already have the
> > right pfn?
>
> What if pfn_valid_within fails? You could have a pointer to the
> previous page.

Sorry, I missed that, you are right.

> > <---
> > unsigned long check = pfn + iter;
> > page = pfn_to_page(check);
> > --->
> >
> > The rest looks good to me.
> >
> > Reviewed-by: Oscar Salvador <osalvador@suse.de>
>
> Thanks!
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 88d50e74e3fe..c82193db4be6 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1388,10 +1388,8 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 					page_is_file_cache(page));
 
 		} else {
-#ifdef CONFIG_DEBUG_VM
-			pr_alert("failed to isolate pfn %lx\n", pfn);
+			pr_warn("failed to isolate pfn %lx\n", pfn);
 			dump_page(page, "isolation failed");
-#endif
 			put_page(page);
 			/* Because we don't have big zone->lock. we should
 			   check this again here. */
@@ -1411,8 +1409,14 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 		/* Allocate a new page from the nearest neighbor node */
 		ret = migrate_pages(&source, new_node_page, NULL, 0,
 					MIGRATE_SYNC, MR_MEMORY_HOTPLUG);
-		if (ret)
+		if (ret) {
+			list_for_each_entry(page, &source, lru) {
+				pr_warn("migrating pfn %lx failed ret:%d ",
+					page_to_pfn(page), ret);
+				dump_page(page, "migration failure");
+			}
 			putback_movable_pages(&source);
+		}
 	}
out:
 	return ret;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a919ba5cb3c8..ec2c7916dc2d 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7845,6 +7845,7 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
 		return false;
 unmovable:
 	WARN_ON_ONCE(zone_idx(zone) == ZONE_MOVABLE);
+	dump_page(pfn_to_page(pfn+iter), "unmovable page");
 	return true;
 }
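The migration-failure hunk above walks every page still linked on `source` and dumps it before putting the pages back. The pattern is simply "on failure, report each item left on the work list"; a userspace sketch of it (the `fake_page` type and `dump_failed_pages` helper are illustrative stand-ins, not kernel API):

```c
#include <stddef.h>
#include <stdio.h>

/* Toy stand-in for struct page entries left on &source after failure. */
struct fake_page {
    unsigned long pfn;
    struct fake_page *next;
};

/*
 * Mirrors the list_for_each_entry() loop added to do_migrate_range():
 * every page still on the list is reported together with the migration
 * return code.  Returns how many pages were dumped so the behavior is
 * checkable.
 */
static int dump_failed_pages(const struct fake_page *head, int ret)
{
    int dumped = 0;
    for (const struct fake_page *p = head; p != NULL; p = p->next) {
        printf("migrating pfn %lx failed ret:%d\n", p->pfn, ret);
        dumped++;
    }
    return dumped;
}
```

The design point is that `migrate_pages()` leaves unmigrated pages on the source list, so on error the list is exactly the set of pages worth dumping; only afterwards does `putback_movable_pages()` return them to their LRU lists.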