[v2] mm/page_alloc: Fix sleeping function called in case of irqs disabled

Message ID 20210706075754.10726-1-qiang.zhang@windriver.com (mailing list archive)
State New
Series [v2] mm/page_alloc: Fix sleeping function called in case of irqs disabled

Commit Message

Zhang, Qiang July 6, 2021, 7:57 a.m. UTC
From: Zqiang <qiang.zhang@windriver.com>

BUG: sleeping function called from invalid context at mm/page_alloc.c:5179
in_atomic(): 0, irqs_disabled(): 1, non_block: 0, pid: 1, name: swapper/0
.....
__dump_stack lib/dump_stack.c:79 [inline]
 dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:96
 ___might_sleep.cold+0x1f1/0x237 kernel/sched/core.c:9153
 prepare_alloc_pages+0x3da/0x580 mm/page_alloc.c:5179
 __alloc_pages+0x12f/0x500 mm/page_alloc.c:5375
 alloc_page_interleave+0x1e/0x200 mm/mempolicy.c:2147
 alloc_pages+0x238/0x2a0 mm/mempolicy.c:2270
 stack_depot_save+0x39d/0x4e0 lib/stackdepot.c:303
 save_stack+0x15e/0x1e0 mm/page_owner.c:120
 __set_page_owner+0x50/0x290 mm/page_owner.c:181
 prep_new_page mm/page_alloc.c:2445 [inline]
 __alloc_pages_bulk+0x8b9/0x1870 mm/page_alloc.c:5313
 alloc_pages_bulk_array_node include/linux/gfp.h:557 [inline]
 vm_area_alloc_pages mm/vmalloc.c:2775 [inline]
 __vmalloc_area_node mm/vmalloc.c:2845 [inline]
 __vmalloc_node_range+0x39d/0x960 mm/vmalloc.c:2947
 __vmalloc_node mm/vmalloc.c:2996 [inline]
 vzalloc+0x67/0x80 mm/vmalloc.c:3066

When PAGE_OWNER is enabled, __set_page_owner() allocates pages to save
the call trace. Because this allocation runs with IRQs disabled
(pagesets.lock is held), a gfp mask that allows sleeping triggers the
warning above. prep_new_page() itself does not need IRQ protection, so
fix this by enabling IRQs before calling prep_new_page().
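For illustration, the pre-patch loop in __alloc_pages_bulk() looks roughly
like this (a condensed sketch, not the literal mm/page_alloc.c code; the
page_list handling and the "Skip existing pages" check are omitted):

	/* Condensed sketch of __alloc_pages_bulk() before this patch */
	local_lock_irqsave(&pagesets.lock, flags);	/* IRQs disabled from here on */
	pcp = this_cpu_ptr(zone->per_cpu_pageset);
	pcp_list = &pcp->lists[order_to_pindex(ac.migratetype, 0)];

	while (nr_populated < nr_pages) {
		page = __rmqueue_pcplist(zone, 0, ac.migratetype, alloc_flags,
					 pcp, pcp_list);
		if (unlikely(!page))
			break;

		nr_account++;
		/*
		 * With CONFIG_PAGE_OWNER, prep_new_page() reaches
		 * __set_page_owner() -> stack_depot_save(), which may call
		 * __alloc_pages() with a sleeping gfp mask while IRQs are
		 * still disabled, producing the splat above.
		 */
		prep_new_page(page, 0, gfp, 0);
		page_array[nr_populated++] = page;
	}

	local_unlock_irqrestore(&pagesets.lock, flags);	/* IRQs enabled again only here */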

Fixes: 0f87d9d30f21 ("mm/page_alloc: add an array-based interface to the bulk page allocator")
Reported-by: syzbot+0123a2b8f9e623d5b443@syzkaller.appspotmail.com
Suggested-by: Muchun Song <songmuchun@bytedance.com>
Signed-off-by: Zqiang <qiang.zhang@windriver.com>
---
 v1->v2:
 Because the current task may be running on another CPU after the
 local_lock is reacquired, @pcp and @pcp_list need to be reloaded.

 mm/page_alloc.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

Comments

Mel Gorman July 6, 2021, 8:33 a.m. UTC | #1
On Tue, Jul 06, 2021 at 03:57:54PM +0800, qiang.zhang@windriver.com wrote:
> From: Zqiang <qiang.zhang@windriver.com>
> 
> BUG: sleeping function called from invalid context at mm/page_alloc.c:5179
> in_atomic(): 0, irqs_disabled(): 1, non_block: 0, pid: 1, name: swapper/0
> .....
> __dump_stack lib/dump_stack.c:79 [inline]
>  dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:96
>  ___might_sleep.cold+0x1f1/0x237 kernel/sched/core.c:9153
>  prepare_alloc_pages+0x3da/0x580 mm/page_alloc.c:5179
>  __alloc_pages+0x12f/0x500 mm/page_alloc.c:5375
>  alloc_page_interleave+0x1e/0x200 mm/mempolicy.c:2147
>  alloc_pages+0x238/0x2a0 mm/mempolicy.c:2270
>  stack_depot_save+0x39d/0x4e0 lib/stackdepot.c:303
>  save_stack+0x15e/0x1e0 mm/page_owner.c:120
>  __set_page_owner+0x50/0x290 mm/page_owner.c:181
>  prep_new_page mm/page_alloc.c:2445 [inline]
>  __alloc_pages_bulk+0x8b9/0x1870 mm/page_alloc.c:5313
>  alloc_pages_bulk_array_node include/linux/gfp.h:557 [inline]
>  vm_area_alloc_pages mm/vmalloc.c:2775 [inline]
>  __vmalloc_area_node mm/vmalloc.c:2845 [inline]
>  __vmalloc_node_range+0x39d/0x960 mm/vmalloc.c:2947
>  __vmalloc_node mm/vmalloc.c:2996 [inline]
>  vzalloc+0x67/0x80 mm/vmalloc.c:3066
> 
> When PAGE_OWNER is enabled, __set_page_owner() allocates pages to save
> the call trace. Because this allocation runs with IRQs disabled
> (pagesets.lock is held), a gfp mask that allows sleeping triggers the
> warning above. prep_new_page() itself does not need IRQ protection, so
> fix this by enabling IRQs before calling prep_new_page().
> 
> Fixes: 0f87d9d30f21 ("mm/page_alloc: add an array-based interface to the bulk page allocator")
> Reported-by: syzbot+0123a2b8f9e623d5b443@syzkaller.appspotmail.com
> Suggested-by: Muchun Song <songmuchun@bytedance.com>
> Signed-off-by: Zqiang <qiang.zhang@windriver.com>

Same comment as v1 with respect to the impact of enabling/disabling IRQs
for each page allocated -- it hurts performance regardless of whether
page owner is enabled or not. If returning a single page is undesirable
then a slightly different alternative is to only enable IRQs if page
owner is set and then goto "Attempt the batch allocation" to reacquire
the lock and lookup pcp.
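
For reference, a rough sketch of that alternative (not a tested patch; it
assumes the page_owner_inited static key from include/linux/page_owner.h
with CONFIG_PAGE_OWNER=y is a suitable check, and "attempt_batch" stands
in for the existing "Attempt the batch allocation" site):

attempt_batch:
	local_lock_irqsave(&pagesets.lock, flags);
	pcp = this_cpu_ptr(zone->per_cpu_pageset);
	pcp_list = &pcp->lists[order_to_pindex(ac.migratetype, 0)];

	while (nr_populated < nr_pages) {
		/* "Skip existing pages" check omitted for brevity */

		page = __rmqueue_pcplist(zone, 0, ac.migratetype, alloc_flags,
					 pcp, pcp_list);
		if (unlikely(!page)) {
			/* Try and get at least one page */
			if (!nr_populated)
				goto failed_irq;
			break;
		}
		nr_account++;

		if (static_branch_unlikely(&page_owner_inited)) {
			/*
			 * Page owner will allocate, so drop the lock only in
			 * this case, prep the page with IRQs enabled, then
			 * jump back to reacquire the lock and reload
			 * pcp/pcp_list: the task may now run on another CPU.
			 */
			local_unlock_irqrestore(&pagesets.lock, flags);
			prep_new_page(page, 0, gfp, 0);
			page_array[nr_populated++] = page;
			goto attempt_batch;
		}

		prep_new_page(page, 0, gfp, 0);
		page_array[nr_populated++] = page;
	}

	local_unlock_irqrestore(&pagesets.lock, flags);

Whether the extra branch is cheaper than toggling IRQs per page when page
owner is disabled would still need the performance checking mentioned above.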

Patch

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d6e94cc8066c..9adbc0a20938 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5286,11 +5286,6 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 	if (unlikely(!zone))
 		goto failed;
 
-	/* Attempt the batch allocation */
-	local_lock_irqsave(&pagesets.lock, flags);
-	pcp = this_cpu_ptr(zone->per_cpu_pageset);
-	pcp_list = &pcp->lists[order_to_pindex(ac.migratetype, 0)];
-
 	while (nr_populated < nr_pages) {
 
 		/* Skip existing pages */
@@ -5299,14 +5294,23 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 			continue;
 		}
 
+		/* Attempt the batch allocation */
+		local_lock_irqsave(&pagesets.lock, flags);
+		pcp = this_cpu_ptr(zone->per_cpu_pageset);
+		pcp_list = &pcp->lists[order_to_pindex(ac.migratetype, 0)];
+
 		page = __rmqueue_pcplist(zone, 0, ac.migratetype, alloc_flags,
 								pcp, pcp_list);
 		if (unlikely(!page)) {
 			/* Try and get at least one page */
 			if (!nr_populated)
 				goto failed_irq;
+
+			local_unlock_irqrestore(&pagesets.lock, flags);
 			break;
 		}
+
+		local_unlock_irqrestore(&pagesets.lock, flags);
 		nr_account++;
 
 		prep_new_page(page, 0, gfp, 0);
@@ -5317,8 +5321,6 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 		nr_populated++;
 	}
 
-	local_unlock_irqrestore(&pagesets.lock, flags);
-
 	__count_zid_vm_events(PGALLOC, zone_idx(zone), nr_account);
 	zone_statistics(ac.preferred_zoneref->zone, zone, nr_account);