Message ID | 20200910183318.20139-4-willy@infradead.org (mailing list archive)
---|---
State | New, archived
Series | Return head pages from find_*_entry
On Thu, 2020-09-10 at 19:33 +0100, Matthew Wilcox (Oracle) wrote:
> Instead of calling find_get_entry() for every page index, use an XArray
> iterator to skip over NULL entries, and avoid calling get_page(),
> because we only want the swap entries.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Acked-by: Johannes Weiner <hannes@cmpxchg.org>

Reverting the "Return head pages from find_*_entry" patchset [1] up to this
patch fixes the endless soft lockups below that the LTP madvise06 test [2]
triggers. Applying the patches that fixed other separate issues in the
patchset [3][4] does not help.

[1] https://lore.kernel.org/intel-gfx/20200910183318.20139-1-willy@infradead.org/
[2] https://github.com/linux-test-project/ltp/blob/master/testcases/kernel/syscalls/madvise/madvise06.c
[3] https://lore.kernel.org/intel-gfx/20200914112738.GM6583@casper.infradead.org/
[4] https://lore.kernel.org/lkml/20200914115559.GN6583@casper.infradead.org/

[ 2653.179563][   C4] CPU: 4 PID: 23320 Comm: madvise06 Not tainted 5.9.0-rc5-next-20200914+ #2
[ 2653.220176][   C4] Hardware name: HP ProLiant BL660c Gen9, BIOS I38 10/17/2018
[ 2653.254908][   C4] RIP: 0010:lock_acquire+0x211/0x8e0
[ 2653.278534][   C4] Code: 83 c0 03 38 d0 7c 08 84 d2 0f 85 3a 05 00 00 8b 85 04 08 00 00 83 e8 01 89 85 04 08 00 00 66 85 c0 0f 85 9a 04 00 00 41 52 9d <48> b8 00 00 00 00 00 fc ff df 48 01 c3 c7 03 00 00 00 00 c7 43 08
[ 2653.369929][   C4] RSP: 0018:ffffc9000e1bf9f0 EFLAGS: 00000246
[ 2653.399398][   C4] RAX: 0000000000000000 RBX: 1ffff92001c37f41 RCX: 1ffff92001c37f27
[ 2653.437720][   C4] RDX: 0000000000000000 RSI: 0000000029956a3e RDI: ffff889042f40844
[ 2653.475829][   C4] RBP: ffff889042f40040 R08: fffffbfff5083905 R09: fffffbfff5083905
[ 2653.511611][   C4] R10: 0000000000000246 R11: fffffbfff5083904 R12: ffffffffa74ce320
[ 2653.547396][   C4] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
[ 2653.582938][   C4] FS:  00007f1fc85e4600(0000) GS:ffff88881e100000(0000) knlGS:0000000000000000
[ 2653.622910][   C4] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 2653.652310][   C4] CR2: 0000000000620050 CR3: 000000054d438002 CR4: 00000000001706e0
[ 2653.688228][   C4] Call Trace:
[ 2653.702537][   C4]  ? rcu_read_unlock+0x40/0x40
[ 2653.723647][   C4]  ? find_held_lock+0x33/0x1c0
[ 2653.744708][   C4]  ? __read_swap_cache_async+0x18f/0x870
[ 2653.770547][   C4]  get_swap_device+0xf5/0x280
rcu_read_lock at include/linux/rcupdate.h:642
(inlined by) get_swap_device at mm/swapfile.c:1303
[ 2653.791303][   C4]  ? get_swap_device+0xce/0x280
[ 2653.812693][   C4]  ? swap_page_trans_huge_swapped+0x2a0/0x2a0
[ 2653.839963][   C4]  __read_swap_cache_async+0x10c/0x870
__read_swap_cache_async at mm/swap_state.c:469
[ 2653.864243][   C4]  ? rcu_read_lock_sched_held+0x9c/0xd0
[ 2653.890657][   C4]  ? find_get_incore_page+0x220/0x220
[ 2653.916978][   C4]  ? rcu_read_lock_held+0x9c/0xb0
[ 2653.940235][   C4]  ? find_held_lock+0x33/0x1c0
[ 2653.961325][   C4]  ? do_madvise.part.30+0xd11/0x1b70
[ 2653.984922][   C4]  ? lock_downgrade+0x730/0x730
[ 2654.006502][   C4]  read_swap_cache_async+0x60/0xb0
read_swap_cache_async at mm/swap_state.c:564
[ 2654.029694][   C4]  ? __read_swap_cache_async+0x870/0x870
[ 2654.055486][   C4]  ? xas_find+0x410/0x6c0
[ 2654.074663][   C4]  do_madvise.part.30+0xd47/0x1b70
force_shm_swapin_readahead at mm/madvise.c:243
(inlined by) madvise_willneed at mm/madvise.c:277
(inlined by) madvise_vma at mm/madvise.c:939
(inlined by) do_madvise at mm/madvise.c:1142
[ 2654.097959][   C4]  ? find_held_lock+0x33/0x1c0
[ 2654.119031][   C4]  ? swapin_walk_pmd_entry+0x430/0x430
[ 2654.143518][   C4]  ? down_read_nested+0x420/0x420
[ 2654.165748][   C4]  ? rcu_read_lock_sched_held+0x9c/0xd0
[ 2654.190523][   C4]  ? __x64_sys_madvise+0xa1/0x110
[ 2654.212973][   C4]  __x64_sys_madvise+0xa1/0x110
[ 2654.233976][   C4]  ? syscall_enter_from_user_mode+0x1c/0x50
[ 2654.260983][   C4]  do_syscall_64+0x33/0x40
[ 2654.281132][   C4]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 2654.307623][   C4] RIP: 0033:0x7f1fc80fca6b
[ 2654.327125][   C4] Code: 64 89 02 b8 ff ff ff ff c3 48 8b 15 17 54 2c 00 f7 d8 64 89 02 b8 ff ff ff ff eb bc 0f 1f 00 f3 0f 1e fa b8 1c 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d ed 53 2c 00 f7 d8 64 89 01 48
[ 2654.420246][   C4] RSP: 002b:00007fff53609998 EFLAGS: 00000202 ORIG_RAX: 000000000000001c
[ 2654.458926][   C4] RAX: ffffffffffffffda RBX: 00007f1fc85e4580 RCX: 00007f1fc80fca6b
[ 2654.494295][   C4] RDX: 0000000000000003 RSI: 0000000019000000 RDI: 00007f1faf006000
[ 2654.530104][   C4] RBP: 00007f1faf006000 R08: 0000000000000000 R09: 00007fff53609284
[ 2654.566057][   C4] R10: 0000000000000003 R11: 0000000000000202 R12: 0000000000000000
[ 2654.601697][   C4] R13: 0000000000000001 R14: 0000000000000000 R15: 0000000000000000
...
[ 2846.587644][ T353] Showing all locks held in the system:
[ 2846.622367][ T353] 1 lock held by khungtaskd/353:
[ 2846.644378][ T353] #0: ffffffffa74ce320 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire.constprop.51+0x0/0x30
[ 2846.695738][ T353] 1 lock held by khugepaged/361:
[ 2846.718056][ T353] #0: ffffffffa75418e8 (lock#4){+.+.}-{3:3}, at: lru_add_drain_all+0x55/0x5f0
[ 2846.758184][ T353] 1 lock held by madvise06/23320:
[ 2846.780486][ T353]
[ 2846.790445][ T353] =============================================

> ---
>  mm/madvise.c | 21 ++++++++++++---------
>  1 file changed, 12 insertions(+), 9 deletions(-)
>
> diff --git a/mm/madvise.c b/mm/madvise.c
> index dd1d43cf026d..96189acd6969 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -224,25 +224,28 @@ static void force_shm_swapin_readahead(struct vm_area_struct *vma,
>  		unsigned long start, unsigned long end,
>  		struct address_space *mapping)
>  {
> -	pgoff_t index;
> +	XA_STATE(xas, &mapping->i_pages, linear_page_index(vma, start));
> +	pgoff_t end_index = end / PAGE_SIZE;
>  	struct page *page;
> -	swp_entry_t swap;
>
> -	for (; start < end; start += PAGE_SIZE) {
> -		index = ((start - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
> +	rcu_read_lock();
> +	xas_for_each(&xas, page, end_index) {
> +		swp_entry_t swap;
>
> -		page = find_get_entry(mapping, index);
> -		if (!xa_is_value(page)) {
> -			if (page)
> -				put_page(page);
> +		if (!xa_is_value(page))
>  			continue;
> -		}
> +		rcu_read_unlock();
> +
>  		swap = radix_to_swp_entry(page);
>  		page = read_swap_cache_async(swap, GFP_HIGHUSER_MOVABLE,
>  							NULL, 0, false);
>  		if (page)
>  			put_page(page);
> +
> +		rcu_read_lock();
> +		xas_reset(&xas);
>  	}
> +	rcu_read_unlock();
>
>  	lru_add_drain();	/* Push any new pages onto the LRU now */
>  }
On Mon, 2020-09-14 at 12:17 -0400, Qian Cai wrote:
> On Thu, 2020-09-10 at 19:33 +0100, Matthew Wilcox (Oracle) wrote:
> > Instead of calling find_get_entry() for every page index, use an XArray
> > iterator to skip over NULL entries, and avoid calling get_page(),
> > because we only want the swap entries.
> >
> > Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> > Acked-by: Johannes Weiner <hannes@cmpxchg.org>
>
> Reverting the "Return head pages from find_*_entry" patchset [1] up to this
> patch fixes the endless soft lockups below that the LTP madvise06 test [2]
> triggers. Applying the patches that fixed other separate issues in the
> patchset [3][4] does not help.

I forgot to send this RCU stall trace as well, which might help debugging.

00: [ 2852.137748] madvise06 (62712): drop_caches: 3
01: [ 2928.208367] rcu: INFO: rcu_sched self-detected stall on CPU
01: [ 2928.210083] rcu: 1-....: (6499 ticks this GP) idle=036/1/0x4000000000000002 softirq=1741392/1741392 fqs=3161
01: [ 2928.210610] (t=6500 jiffies g=610849 q=12529)
01: [ 2928.210620] Task dump for CPU 1:
01: [ 2928.210630] task:madvise06 state:R running task stack:53320 pid:62712 ppid: 62711 flags:0x00000004
01: [ 2928.210676] Call Trace:
01: [ 2928.210693] [<00000000af57ec88>] show_stack+0x158/0x1f0
01: [ 2928.210703] [<00000000ae55b692>] sched_show_task+0x3d2/0x4c8
01: [ 2928.210710] [<00000000af5846aa>] rcu_dump_cpu_stacks+0x26a/0x2a8
01: [ 2928.210718] [<00000000ae64fa62>] rcu_sched_clock_irq+0x1c92/0x2188
01: [ 2928.210726] [<00000000ae6662ee>] update_process_times+0x4e/0x148
01: [ 2928.210734] [<00000000ae690c26>] tick_sched_timer+0x86/0x188
01: [ 2928.210741] [<00000000ae66989c>] __hrtimer_run_queues+0x84c/0x10b8
01: [ 2928.210748] [<00000000ae66c80a>] hrtimer_interrupt+0x38a/0x860
01: [ 2928.210758] [<00000000ae48dbf2>] do_IRQ+0x152/0x1c8
01: [ 2928.210767] [<00000000af5b00ea>] ext_int_handler+0x18e/0x194
01: [ 2928.210774] [<00000000ae5e332e>] arch_local_irq_restore+0x86/0xa0
01: [ 2928.210782] [<00000000af58da04>] lock_is_held_type+0xe4/0x130
01: [ 2928.210791] [<00000000ae63355a>] rcu_read_lock_held+0xba/0xd8
01: [ 2928.210799] [<00000000af0125fc>] xas_descend+0x244/0x2c8
01: [ 2928.210806] [<00000000af012754>] xas_load+0xd4/0x148
01: [ 2928.210812] [<00000000af014490>] xas_find+0x5d0/0x818
01: [ 2928.210822] [<00000000ae97e644>] do_madvise+0xd5c/0x1600
01: [ 2928.210828] [<00000000ae97f2d2>] __s390x_sys_madvise+0x72/0x98
01: [ 2928.210835] [<00000000af5af844>] system_call+0xdc/0x278
01: [ 2928.210841] 3 locks held by madvise06/62712:
01: [ 2928.216406] #0: 00000001437fca18 (&mm->mmap_lock){++++}-{3:3}, at: do_madvise+0x18c/0x1600
01: [ 2928.216430] #1: 00000000afbdd3e0 (rcu_read_lock){....}-{1:2}, at: do_madvise+0xe72/0x1600
01: [ 2928.216449] #2: 00000000afbe0818 (rcu_node_1){-.-.}-{2:2}, at: rcu_dump_cpu_stacks+0xb2/0x2a8
On Mon, Sep 14, 2020 at 12:17:07PM -0400, Qian Cai wrote:
> Reverting the "Return head pages from find_*_entry" patchset [1] up to this
> patch fixes the endless soft lockups below that the LTP madvise06 test [2]
> triggers. Applying the patches that fixed other separate issues in the
> patchset [3][4] does not help.

Thanks for the report.  Could you try this?

diff --git a/mm/madvise.c b/mm/madvise.c
index 96189acd6969..2d9ceccb338d 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -234,6 +234,7 @@ static void force_shm_swapin_readahead(struct vm_area_struct *vma,
 
 		if (!xa_is_value(page))
 			continue;
+		xas_pause(&xas);
 		rcu_read_unlock();
 
 		swap = radix_to_swp_entry(page);
@@ -243,7 +244,6 @@ static void force_shm_swapin_readahead(struct vm_area_struct *vma,
 			put_page(page);
 
 		rcu_read_lock();
-		xas_reset(&xas);
 	}
 	rcu_read_unlock();
On Mon, 2020-09-14 at 17:50 +0100, Matthew Wilcox wrote:
> On Mon, Sep 14, 2020 at 12:17:07PM -0400, Qian Cai wrote:
> > Reverting the "Return head pages from find_*_entry" patchset [1] up to this
> > patch fixes the endless soft lockups below that the LTP madvise06 test [2]
> > triggers. Applying the patches that fixed other separate issues in the
> > patchset [3][4] does not help.
>
> Thanks for the report.  Could you try this?

It works fine.

> diff --git a/mm/madvise.c b/mm/madvise.c
> index 96189acd6969..2d9ceccb338d 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -234,6 +234,7 @@ static void force_shm_swapin_readahead(struct vm_area_struct *vma,
>  
>  		if (!xa_is_value(page))
>  			continue;
> +		xas_pause(&xas);
>  		rcu_read_unlock();
>  
>  		swap = radix_to_swp_entry(page);
> @@ -243,7 +244,6 @@ static void force_shm_swapin_readahead(struct vm_area_struct *vma,
>  			put_page(page);
>  
>  		rcu_read_lock();
> -		xas_reset(&xas);
>  	}
>  	rcu_read_unlock();
diff --git a/mm/madvise.c b/mm/madvise.c
index dd1d43cf026d..96189acd6969 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -224,25 +224,28 @@ static void force_shm_swapin_readahead(struct vm_area_struct *vma,
 		unsigned long start, unsigned long end,
 		struct address_space *mapping)
 {
-	pgoff_t index;
+	XA_STATE(xas, &mapping->i_pages, linear_page_index(vma, start));
+	pgoff_t end_index = end / PAGE_SIZE;
 	struct page *page;
-	swp_entry_t swap;
 
-	for (; start < end; start += PAGE_SIZE) {
-		index = ((start - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
+	rcu_read_lock();
+	xas_for_each(&xas, page, end_index) {
+		swp_entry_t swap;
 
-		page = find_get_entry(mapping, index);
-		if (!xa_is_value(page)) {
-			if (page)
-				put_page(page);
+		if (!xa_is_value(page))
 			continue;
-		}
+		rcu_read_unlock();
+
 		swap = radix_to_swp_entry(page);
 		page = read_swap_cache_async(swap, GFP_HIGHUSER_MOVABLE,
 							NULL, 0, false);
 		if (page)
 			put_page(page);
+
+		rcu_read_lock();
+		xas_reset(&xas);
 	}
+	rcu_read_unlock();
 
 	lru_add_drain();	/* Push any new pages onto the LRU now */
 }