Message ID: 20210512031057.13580-3-wangkefeng.wang@huawei.com (mailing list archive)
State: New, archived
Series: [1/3] mm/page_alloc: Use dump_page() directly
On Wed, May 12, 2021 at 11:10:57AM +0800, Kefeng Wang wrote:
> If page is poisoned, it will crash when we call some page related
> functions, so must check whether the page is poisoned or not firstly.

https://lore.kernel.org/linux-mm/20210430145549.2662354-4-willy@infradead.org/

> Fixes: 6197ab984b41 ("mm: improve dump_page() for compound pages")
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
>  mm/debug.c | 15 +++++++++------
>  1 file changed, 9 insertions(+), 6 deletions(-)
>
> diff --git a/mm/debug.c b/mm/debug.c
> index 84cdcd0f7bd3..cf84cd9df527 100644
> --- a/mm/debug.c
> +++ b/mm/debug.c
> @@ -44,20 +44,19 @@ const struct trace_print_flags vmaflag_names[] = {
>
>  static void __dump_page(struct page *page, const char *reason)
>  {
> -	struct page *head = compound_head(page);
> +	struct page *head = NULL;
>  	struct address_space *mapping;
> -	bool page_poisoned = PagePoisoned(page);
> -	bool compound = PageCompound(page);
> +	bool compound;
>  	/*
>  	 * Accessing the pageblock without the zone lock. It could change to
>  	 * "isolate" again in the meantime, but since we are just dumping the
>  	 * state for debugging, it should be fine to accept a bit of
>  	 * inaccuracy here due to racing.
>  	 */
> -	bool page_cma = is_migrate_cma_page(page);
> +	bool page_cma;
>  	int mapcount;
>  	char *type = "";
> -
> +	bool page_poisoned = PagePoisoned(page);
>  	/*
>  	 * If struct page is poisoned don't access Page*() functions as that
>  	 * leads to recursive loop. Page*() check for poisoned pages, and calls
> @@ -68,6 +67,10 @@ static void __dump_page(struct page *page, const char *reason)
>  		goto hex_only;
>  	}
>
> +	head = compound_head(page);
> +	page_poisoned = PagePoisoned(page);
> +	page_cma = is_migrate_cma_page(page);
> +
>  	if (page < head || (page >= head + MAX_ORDER_NR_PAGES)) {
>  		/*
>  		 * Corrupt page, so we cannot call page_mapping. Instead, do a
> @@ -178,7 +181,7 @@ static void __dump_page(struct page *page, const char *reason)
>  		print_hex_dump(KERN_WARNING, "raw: ", DUMP_PREFIX_NONE, 32,
>  			       sizeof(unsigned long), page,
>  			       sizeof(struct page), false);
> -	if (head != page)
> +	if (head && head != page)
>  		print_hex_dump(KERN_WARNING, "head: ", DUMP_PREFIX_NONE, 32,
>  			       sizeof(unsigned long), head,
>  			       sizeof(struct page), false);
> --
> 2.26.2
On 2021/5/12 11:15, Matthew Wilcox wrote:
> On Wed, May 12, 2021 at 11:10:57AM +0800, Kefeng Wang wrote:
>> If page is poisoned, it will crash when we call some page related
>> functions, so must check whether the page is poisoned or not firstly.
>
> https://lore.kernel.org/linux-mm/20210430145549.2662354-4-willy@infradead.org/

OK, please ignore this series; I found the problem while debugging an arm
issue on LTS 5.10, thanks.

>
>> Fixes: 6197ab984b41 ("mm: improve dump_page() for compound pages")
>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>> ---
>>  mm/debug.c | 15 +++++++++------
>>  1 file changed, 9 insertions(+), 6 deletions(-)
>>
>> diff --git a/mm/debug.c b/mm/debug.c
>> index 84cdcd0f7bd3..cf84cd9df527 100644
>> --- a/mm/debug.c
>> +++ b/mm/debug.c
>> @@ -44,20 +44,19 @@ const struct trace_print_flags vmaflag_names[] = {
>>
>>  static void __dump_page(struct page *page, const char *reason)
>>  {
>> -	struct page *head = compound_head(page);
>> +	struct page *head = NULL;
>>  	struct address_space *mapping;
>> -	bool page_poisoned = PagePoisoned(page);
>> -	bool compound = PageCompound(page);
>> +	bool compound;
>>  	/*
>>  	 * Accessing the pageblock without the zone lock. It could change to
>>  	 * "isolate" again in the meantime, but since we are just dumping the
>>  	 * state for debugging, it should be fine to accept a bit of
>>  	 * inaccuracy here due to racing.
>>  	 */
>> -	bool page_cma = is_migrate_cma_page(page);
>> +	bool page_cma;
>>  	int mapcount;
>>  	char *type = "";
>> -
>> +	bool page_poisoned = PagePoisoned(page);
>>  	/*
>>  	 * If struct page is poisoned don't access Page*() functions as that
>>  	 * leads to recursive loop. Page*() check for poisoned pages, and calls
>> @@ -68,6 +67,10 @@ static void __dump_page(struct page *page, const char *reason)
>>  		goto hex_only;
>>  	}
>>
>> +	head = compound_head(page);
>> +	page_poisoned = PagePoisoned(page);
>> +	page_cma = is_migrate_cma_page(page);
>> +
>>  	if (page < head || (page >= head + MAX_ORDER_NR_PAGES)) {
>>  		/*
>>  		 * Corrupt page, so we cannot call page_mapping. Instead, do a
>> @@ -178,7 +181,7 @@ static void __dump_page(struct page *page, const char *reason)
>>  		print_hex_dump(KERN_WARNING, "raw: ", DUMP_PREFIX_NONE, 32,
>>  			       sizeof(unsigned long), page,
>>  			       sizeof(struct page), false);
>> -	if (head != page)
>> +	if (head && head != page)
>>  		print_hex_dump(KERN_WARNING, "head: ", DUMP_PREFIX_NONE, 32,
>>  			       sizeof(unsigned long), head,
>>  			       sizeof(struct page), false);
>> --
>> 2.26.2
diff --git a/mm/debug.c b/mm/debug.c
index 84cdcd0f7bd3..cf84cd9df527 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -44,20 +44,19 @@ const struct trace_print_flags vmaflag_names[] = {
 
 static void __dump_page(struct page *page, const char *reason)
 {
-	struct page *head = compound_head(page);
+	struct page *head = NULL;
 	struct address_space *mapping;
-	bool page_poisoned = PagePoisoned(page);
-	bool compound = PageCompound(page);
+	bool compound;
 	/*
 	 * Accessing the pageblock without the zone lock. It could change to
 	 * "isolate" again in the meantime, but since we are just dumping the
 	 * state for debugging, it should be fine to accept a bit of
 	 * inaccuracy here due to racing.
 	 */
-	bool page_cma = is_migrate_cma_page(page);
+	bool page_cma;
 	int mapcount;
 	char *type = "";
-
+	bool page_poisoned = PagePoisoned(page);
 	/*
 	 * If struct page is poisoned don't access Page*() functions as that
 	 * leads to recursive loop. Page*() check for poisoned pages, and calls
@@ -68,6 +67,10 @@ static void __dump_page(struct page *page, const char *reason)
 		goto hex_only;
 	}
 
+	head = compound_head(page);
+	page_poisoned = PagePoisoned(page);
+	page_cma = is_migrate_cma_page(page);
+
 	if (page < head || (page >= head + MAX_ORDER_NR_PAGES)) {
 		/*
 		 * Corrupt page, so we cannot call page_mapping. Instead, do a
@@ -178,7 +181,7 @@ static void __dump_page(struct page *page, const char *reason)
 		print_hex_dump(KERN_WARNING, "raw: ", DUMP_PREFIX_NONE, 32,
 			       sizeof(unsigned long), page,
 			       sizeof(struct page), false);
-	if (head != page)
+	if (head && head != page)
 		print_hex_dump(KERN_WARNING, "head: ", DUMP_PREFIX_NONE, 32,
 			       sizeof(unsigned long), head,
 			       sizeof(struct page), false);
If the page is poisoned, calling some page-related functions on it will
crash, so we must first check whether the page is poisoned.

Fixes: 6197ab984b41 ("mm: improve dump_page() for compound pages")
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/debug.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)