
Blocked tasks on 3.15.1

Message ID 53C7CD0F.7070603@fb.com (mailing list archive)
State New, archived

Commit Message

Chris Mason July 17, 2014, 1:18 p.m. UTC
[ deadlocks during rsync in 3.15 with compression enabled ]

Hi everyone,

I still haven't been able to reproduce this one here, but I'm going
through a series of tests with lzo compression forced and every
operation forced to ordered.  Hopefully it'll kick it out soon.

While I'm hammering away, could you please try this patch?  If this is
the bug you're hitting, the deadlock will go away and you'll see this
printk in the log.

thanks!

-chris
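
A quick way to confirm whether the debug printk above fired, assuming the patch is applied and the workload has been run ("racing" matches the KERN_CRIT string the patch adds):

  # On the patched kernel, after reproducing the rsync workload:
  dmesg | grep -i racing

  # If tasks still hang but nothing matches, dump blocked tasks:
  echo w > /proc/sysrq-trigger    # sysrq-w: tasks in uninterruptible sleep
  dmesg | tail -n 200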


Comments

Marc MERLIN July 19, 2014, 12:33 a.m. UTC | #1
On Thu, Jul 17, 2014 at 09:18:07AM -0400, Chris Mason wrote:
> 
> [ deadlocks during rsync in 3.15 with compression enabled ]
> 
> Hi everyone,
> 
> I still haven't been able to reproduce this one here, but I'm going
> through a series of tests with lzo compression forced and every
> operation forced to ordered.  Hopefully it'll kick it out soon.
> 
> While I'm hammering away, could you please try this patch?  If this is
> the bug you're hitting, the deadlock will go away and you'll see this
> printk in the log.
> 
> thanks!
> 
> -chris
> 
> diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
> index 3668048..8ab56df 100644
> --- a/fs/btrfs/inode.c
> +++ b/fs/btrfs/inode.c
> @@ -8157,6 +8157,13 @@ void btrfs_destroy_inode(struct inode *inode)
>  		spin_unlock(&root->fs_info->ordered_root_lock);
>  	}
>  
> +	spin_lock(&root->fs_info->ordered_root_lock);
> +	if (!list_empty(&BTRFS_I(inode)->ordered_operations)) {
> +		list_del_init(&BTRFS_I(inode)->ordered_operations);
> +printk(KERN_CRIT "racing inode deletion with ordered operations!!!!!!!!!!!\n");
> +	}
> +	spin_unlock(&root->fs_info->ordered_root_lock);
> +
>  	if (test_bit(BTRFS_INODE_HAS_ORPHAN_ITEM,
>  		     &BTRFS_I(inode)->runtime_flags)) {
>  		btrfs_info(root->fs_info, "inode %llu still on the orphan list",

I've gotten more blocked task messages with your patch; the kernel log is below.


See also the message I sent about memory leaks, and how enabling
kmemleak gets btrfs to deadlock reliably soon after boot.
https://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg35568.html
(this was before your patch though)

With your patch (and without kmemleak):

gargamel:/etc/apache2/sites-enabled# ps -eo pid,etime,wchan:30,args |grep df
 3495    01:48:38 btrfs_statfs                   df -hP -x none -x tmpfs -x iso9660 -x udf -x nfs
 4105    07:48:39 btrfs_statfs                   df -hP -x none -x tmpfs -x iso9660 -x udf -x nfs
12639       48:38 btrfs_statfs                   df -hP -x none -x tmpfs -x iso9660 -x udf -x nfs
12691       48:37 btrfs_statfs                   df
14753    06:48:38 btrfs_statfs                   df -hP -x none -x tmpfs -x iso9660 -x udf -x nfs
17214    10:48:39 btrfs_statfs                   df -hP -x none -x tmpfs -x iso9660 -x udf -x nfs
17526    03:48:38 btrfs_statfs                   df -hP -x none -x tmpfs -x iso9660 -x udf -x nfs
18710    09:48:38 btrfs_statfs                   df -hP -x none -x tmpfs -x iso9660 -x udf -x nfs
23668    05:48:38 btrfs_statfs                   df -hP -x none -x tmpfs -x iso9660 -x udf -x nfs
26675    11:37:42 btrfs_statfs                   df .
26828    02:48:38 btrfs_statfs                   df -hP -x none -x tmpfs -x iso9660 -x udf -x nfs
27515    08:48:38 btrfs_statfs                   df -hP -x none -x tmpfs -x iso9660 -x udf -x nfs

sysrq-w does not show me output for those and I cannot understand why.
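
One possible way to pull stacks for exactly those stuck df processes when sysrq-w stays silent, sketched here on the assumption that CONFIG_STACKTRACE is enabled (it exposes a per-task kernel stack under /proc):

  # As root: dump the kernel stack of every blocked df process by PID.
  for p in $(pgrep -x df); do
      echo "== pid $p =="
      cat /proc/$p/stack
  done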

However, I have found that btrfs raid1 on top of dmcrypt has given me no end of trouble.
I lost that filesystem twice due to corruption, and now it hangs my machine (strace shows
that df is hanging on that partition).
gargamel:~# btrfs fi df /mnt/btrfs_raid0
Data, RAID1: total=222.00GiB, used=221.61GiB
Data, single: total=8.00MiB, used=0.00
System, RAID1: total=8.00MiB, used=48.00KiB
System, single: total=4.00MiB, used=0.00
Metadata, RAID1: total=2.00GiB, used=1.10GiB
Metadata, single: total=8.00MiB, used=0.00
unknown, single: total=384.00MiB, used=0.00
gargamel:~# btrfs fi show /mnt/btrfs_raid0
Label: 'btrfs_raid0'  uuid: 74279e10-46e7-4ac4-8216-a291819a6691
        Total devices 2 FS bytes used 222.71GiB
        devid    1 size 836.13GiB used 224.03GiB path /dev/dm-3
        devid    2 size 836.13GiB used 224.01GiB path /dev/mapper/raid0d2

Btrfs v3.14.1


This is not encouraging; I think I'm going to stop using raid1 in btrfs :(

I tried sysrq-t, but the output goes faster than my serial console can
capture it, so I can't get you a traceback on those df processes.
The dmesg buffer is too small; I already have
Kernel log buffer size (16 => 64KB, 17 => 128KB) (LOG_BUF_SHIFT) [17] (NEW) 17
and the kernel config does not let me increase it to something more useful like 24.

Btrfs in 3.15 has been no end of trouble for me on my 2 machines, and I can't even capture
useful info when it happens, since my long sysrq dumps get truncated and
flow faster than syslog can capture and relay them, it seems.

Do you have any suggestions on how to capture that data better?
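
Two standard options, sketched with illustrative sizes and addresses: the log_buf_len boot parameter overrides LOG_BUF_SHIFT without a kernel reconfig, and netconsole streams the full dmesg to another machine (the approach later reports in this thread use):

  # 1) Enlarge the kernel log buffer at boot (overrides LOG_BUF_SHIFT);
  #    append to the kernel command line in the bootloader config:
  log_buf_len=16M

  # 2) Stream kernel messages to another host over UDP via netconsole
  #    (local port 6665 on eth0, listener at 192.168.1.2:6666; all
  #    addresses here are placeholders):
  modprobe netconsole netconsole=6665@192.168.1.10/eth0,6666@192.168.1.2/
  # On the receiving host (flag syntax varies between netcat flavors):
  nc -u -l 6666 | tee btrfs_hang.log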

In the meantime, the kernel log from when things started hanging is below.  The zm processes
are indeed accessing that raid1 partition.

[67499.502755] INFO: task btrfs-transacti:2867 blocked for more than 120 seconds.
[67499.526860]       Not tainted 3.15.5-amd64-i915-preempt-20140714cm1 #1
[67499.548624] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[67499.575212] btrfs-transacti D 0000000000000001     0  2867      2 0x00000000
[67499.598611]  ffff8802135e7e10 0000000000000046 ffff880118322158 ffff8802135e7fd8
[67499.623218]  ffff8800c96242d0 00000000000141c0 ffff880213496000 ffff880046d1fd40
[67499.647684]  ffff880118322158 ffff880118322100 0000000000000000 ffff8802135e7e20
[67499.672179] Call Trace:
[67499.681567]  [<ffffffff8161fa5e>] schedule+0x73/0x75
[67499.698510]  [<ffffffff8122ceb9>] btrfs_commit_transaction+0x333/0x8b0
[67499.720117]  [<ffffffff8108489d>] ? finish_wait+0x65/0x65
[67499.738331]  [<ffffffff8122966a>] transaction_kthread+0xf8/0x1ab
[67499.758313]  [<ffffffff81229572>] ? btrfs_cleanup_transaction+0x44c/0x44c
[67499.780605]  [<ffffffff8106b564>] kthread+0xae/0xb6
[67499.798238]  [<ffffffff8106b4b6>] ? __kthread_parkme+0x61/0x61
[67499.817609]  [<ffffffff81628d7c>] ret_from_fork+0x7c/0xb0
[67499.835645]  [<ffffffff8106b4b6>] ? __kthread_parkme+0x61/0x61
[67499.856079] INFO: task zma:5225 blocked for more than 120 seconds.
[67499.877451]       Not tainted 3.15.5-amd64-i915-preempt-20140714cm1 #1
[67499.899807] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[67499.926049] zma             D 0000000000000000     0  5225   4863 0x20020084
[67499.950039]  ffff8801f4117bb0 0000000000000082 ffff880211e5b270 ffff8801f4117fd8
[67499.975054]  ffff8801c74c8590 00000000000141c0 ffff880118322100 ffff8800c9e341e8
[67500.000049]  0000000000000000 ffff8800c9e34000 ffff880046d1ff20 ffff8801f4117bc0
[67500.024086] Call Trace:
[67500.033079]  [<ffffffff8161fa5e>] schedule+0x73/0x75
[67500.050477]  [<ffffffff8122c3b4>] wait_current_trans.isra.15+0x98/0xf4
[67500.071684]  [<ffffffff8108489d>] ? finish_wait+0x65/0x65
[67500.089422]  [<ffffffff8122d8ce>] start_transaction+0x498/0x4fc
[67500.108678]  [<ffffffff8122d94d>] btrfs_start_transaction+0x1b/0x1d
[67500.128989]  [<ffffffff81237b3f>] btrfs_create+0x3c/0x1ce
[67500.147460]  [<ffffffff8129a57d>] ? security_inode_permission+0x1c/0x23
[67500.168738]  [<ffffffff81160430>] ? __inode_permission+0x79/0xa4
[67500.188164]  [<ffffffff811616ee>] vfs_create+0x66/0x8c
[67500.204923]  [<ffffffff81162450>] do_last+0x5af/0xa23
[67500.221393]  [<ffffffff81162afb>] path_openat+0x237/0x4de
[67500.238899]  [<ffffffff816225a9>] ? _raw_spin_lock_irqsave+0x21/0x4a
[67500.259953]  [<ffffffff8162223d>] ? _raw_spin_unlock_irqrestore+0x1f/0x32
[67500.281773]  [<ffffffff81163efa>] do_filp_open+0x3a/0x7f
[67500.298935]  [<ffffffff8162220b>] ? _raw_spin_unlock+0x17/0x2a
[67500.317622]  [<ffffffff8116dedd>] ? __alloc_fd+0xea/0xf9
[67500.334714]  [<ffffffff8115648f>] do_sys_open+0x70/0xff
[67500.351512]  [<ffffffff8119691b>] compat_SyS_open+0x1b/0x1d
[67500.369317]  [<ffffffff8162ab2c>] sysenter_dispatch+0x7/0x21
[67500.387936] INFO: task zma:5245 blocked for more than 120 seconds.
[67500.407543]       Not tainted 3.15.5-amd64-i915-preempt-20140714cm1 #1
[67500.428139] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[67500.452610] zma             D ffff88021371eb80     0  5245   4863 0x20020080
[67500.474872]  ffff8800b2fd3bb0 0000000000000086 ffff880211e5b270 ffff8800b2fd3fd8
[67500.498274]  ffff8801889b61d0 00000000000141c0 ffff880118322100 ffff8800c9e341e8
[67500.521624]  0000000000000000 ffff8800c9e34000 ffff880102fce0c0 ffff8800b2fd3bc0
[67500.544987] Call Trace:
[67500.553292]  [<ffffffff8161fa5e>] schedule+0x73/0x75
[67500.569694]  [<ffffffff8122c3b4>] wait_current_trans.isra.15+0x98/0xf4
[67500.590266]  [<ffffffff8108489d>] ? finish_wait+0x65/0x65
[67500.607468]  [<ffffffff8122d8ce>] start_transaction+0x498/0x4fc
[67500.626205]  [<ffffffff8122d94d>] btrfs_start_transaction+0x1b/0x1d
[67500.645984]  [<ffffffff81237b3f>] btrfs_create+0x3c/0x1ce
[67500.663666]  [<ffffffff8129a57d>] ? security_inode_permission+0x1c/0x23
[67500.684852]  [<ffffffff81160430>] ? __inode_permission+0x79/0xa4
[67500.703862]  [<ffffffff811616ee>] vfs_create+0x66/0x8c
[67500.720746]  [<ffffffff81162450>] do_last+0x5af/0xa23
[67500.736856]  [<ffffffff81162afb>] path_openat+0x237/0x4de
[67500.754513]  [<ffffffff816225a9>] ? _raw_spin_lock_irqsave+0x21/0x4a
[67500.775057]  [<ffffffff8162223d>] ? _raw_spin_unlock_irqrestore+0x1f/0x32
[67500.796364]  [<ffffffff81163efa>] do_filp_open+0x3a/0x7f
[67500.813245]  [<ffffffff8162220b>] ? _raw_spin_unlock+0x17/0x2a
[67500.831682]  [<ffffffff8116dedd>] ? __alloc_fd+0xea/0xf9
[67500.848983]  [<ffffffff8115648f>] do_sys_open+0x70/0xff
[67500.866040]  [<ffffffff8119691b>] compat_SyS_open+0x1b/0x1d
[67500.883677]  [<ffffffff8162ab2c>] sysenter_dispatch+0x7/0x21
Marc MERLIN July 19, 2014, 12:44 a.m. UTC | #2
On Fri, Jul 18, 2014 at 05:33:45PM -0700, Marc MERLIN wrote:
> However, I have found that btrfs raid1 on top of dmcrypt has given me no end of trouble.
> I lost that filesystem twice due to corruption, and now it hangs my machine (strace shows
> that df is hanging on that partition).
> gargamel:~# btrfs fi df /mnt/btrfs_raid0
> Data, RAID1: total=222.00GiB, used=221.61GiB
> Data, single: total=8.00MiB, used=0.00
> System, RAID1: total=8.00MiB, used=48.00KiB
> System, single: total=4.00MiB, used=0.00
> Metadata, RAID1: total=2.00GiB, used=1.10GiB
> Metadata, single: total=8.00MiB, used=0.00
> unknown, single: total=384.00MiB, used=0.00
> gargamel:~# btrfs fi show /mnt/btrfs_raid0
> Label: 'btrfs_raid0'  uuid: 74279e10-46e7-4ac4-8216-a291819a6691
>         Total devices 2 FS bytes used 222.71GiB
>         devid    1 size 836.13GiB used 224.03GiB path /dev/dm-3
>         devid    2 size 836.13GiB used 224.01GiB path /dev/mapper/raid0d2
> 
> Btrfs v3.14.1
> 
> 
> This is not encouraging; I think I'm going to stop using raid1 in btrfs :(

Sorry, this may be a bit misleading. I actually lost 2 filesystems that
were raid0 on top of dmcrypt.
This time it's raid1, and the data isn't lost, but btrfs is tripping all
over itself and taking down my whole system, apparently because of that
filesystem.

Marc
Marc MERLIN July 19, 2014, 1:58 a.m. UTC | #3
TL;DR: 3.15.5 (or .1 when I tried it) just hangs over and over again in
multiple ways on my server.
It also hangs reliably on my laptop if I enable kmemleak; otherwise
my laptop mostly survives 3.15.x without kmemleak (although it does
deadlock eventually, but that could be after days/weeks, not hours).

I reverted to 3.14 on that machine, and everything is good again.

As a note, this is the 3rd time I've tried to upgrade this server to 3.15, and
everything goes to crap.  I then go back to 3.14 and things work again;
not great, since btrfs has never been great and stable for me, but it
works well enough.

On Fri, Jul 18, 2014 at 05:44:57PM -0700, Marc MERLIN wrote:
> On Fri, Jul 18, 2014 at 05:33:45PM -0700, Marc MERLIN wrote:
> > However, I have found that btrfs raid1 on top of dmcrypt has given me no end of trouble.
> > I lost that filesystem twice due to corruption, and now it hangs my machine (strace shows
> > that df is hanging on that partition).
> > gargamel:~# btrfs fi df /mnt/btrfs_raid0
> > Data, RAID1: total=222.00GiB, used=221.61GiB
> > Data, single: total=8.00MiB, used=0.00
> > System, RAID1: total=8.00MiB, used=48.00KiB
> > System, single: total=4.00MiB, used=0.00
> > Metadata, RAID1: total=2.00GiB, used=1.10GiB
> > Metadata, single: total=8.00MiB, used=0.00
> > unknown, single: total=384.00MiB, used=0.00
> > gargamel:~# btrfs fi show /mnt/btrfs_raid0
> > Label: 'btrfs_raid0'  uuid: 74279e10-46e7-4ac4-8216-a291819a6691
> >         Total devices 2 FS bytes used 222.71GiB
> >         devid    1 size 836.13GiB used 224.03GiB path /dev/dm-3
> >         devid    2 size 836.13GiB used 224.01GiB path /dev/mapper/raid0d2
> > 
> > Btrfs v3.14.1
> > 
> > 
> > This is not encouraging; I think I'm going to stop using raid1 in btrfs :(
> 
> Sorry, this may be a bit misleading. I actually lost 2 filesystems that
> were raid0 on top of dmcrypt.
> This time it's raid1, and the data isn't lost, but btrfs is tripping all
> over itself and taking down my whole system, apparently because of that
> filesystem.

And just to show that I was wrong to pin this on that one filesystem: the same 3.15.5
with your patch locked up on my root filesystem on the next boot.

This time sysrq-w worked for a change.
Excerpt:

31933	    03:54 btrfs_file_llseek		 tail -n 50 /var/local/src/misterhouse/data/logs/print.log
31960	    32:54 btrfs_file_llseek		 tail -n 50 /var/local/src/misterhouse/data/logs/print.log
32077	    18:54 btrfs_file_llseek		 tail -n 50 /var/local/src/misterhouse/data/logs/print.log

[ 2176.230211] tail	       D ffff8801b3a567c0     0 25396  22031 0x20020080
[ 2176.252788]	ffff88006fed3e20 0000000000000082 00000000000000a8 ffff88006fed3fd8
[ 2176.276039]	ffff8801a542a3d0 00000000000141c0 ffff88020c374e10 ffff88020c374e14
[ 2176.299273]	ffff8801a542a3d0 ffff88020c374e18 00000000ffffffff ffff88006fed3e30
[ 2176.322515] Call Trace:
[ 2176.330739]	[<ffffffff8161fa5e>] schedule+0x73/0x75
[ 2176.346527]	[<ffffffff8161fd1f>] schedule_preempt_disabled+0x18/0x24
[ 2176.367208]	[<ffffffff81620e42>] __mutex_lock_slowpath+0x160/0x1d7
[ 2176.386946]	[<ffffffff81620ed0>] mutex_lock+0x17/0x27
[ 2176.403727]	[<ffffffff8123a33a>] btrfs_file_llseek+0x40/0x205
[ 2176.422603]	[<ffffffff810be59a>] ? from_kgid_munged+0x12/0x1e
[ 2176.441015]	[<ffffffff810482f1>] ? cp_stat64+0x50/0x20b
[ 2176.457841]	[<ffffffff81156627>] vfs_llseek+0x2e/0x30
[ 2176.474606]	[<ffffffff81156c32>] SyS_llseek+0x5b/0xaa
[ 2176.490895]	[<ffffffff8162ab2c>] sysenter_dispatch+0x7/0x21

Full log:
http://marc.merlins.org/tmp/btrfs_hang3.txt

After reboot, it's now hanging on this:
[  362.811392] INFO: task kworker/u8:0:6 blocked for more than 120 seconds.
[  362.831717]       Not tainted 3.15.5-amd64-i915-preempt-20140714cm1 #1
[  362.851516] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  362.875213] kworker/u8:0    D ffff88021265a800     0     6      2 0x00000000
[  362.896672] Workqueue: btrfs-flush_delalloc normal_work_helper
[  362.914260]  ffff8802148cbb60 0000000000000046 ffff8802148cbb30 ffff8802148cbfd8
[  362.936741]  ffff8802148c4150 00000000000141c0 ffff88021f3941c0 ffff8802148c4150
[  362.959195]  ffff8802148cbc00 0000000000000002 ffffffff810fdda8 ffff8802148cbb70
[  362.981602] Call Trace:
[  362.988972]  [<ffffffff810fdda8>] ? wait_on_page_read+0x3c/0x3c
[  363.006769]  [<ffffffff8161fa5e>] schedule+0x73/0x75
[  363.021704]  [<ffffffff8161fc03>] io_schedule+0x60/0x7a
[  363.037414]  [<ffffffff810fddb6>] sleep_on_page+0xe/0x12
[  363.053416]  [<ffffffff8161ff93>] __wait_on_bit_lock+0x46/0x8a
[  363.070980]  [<ffffffff810fde71>] __lock_page+0x69/0x6b
[  363.086722]  [<ffffffff810848d1>] ? autoremove_wake_function+0x34/0x34
[  363.106373]  [<ffffffff81242ab0>] lock_page+0x1e/0x21
[  363.121585]  [<ffffffff812465bb>] extent_write_cache_pages.isra.16.constprop.32+0x10e/0x2c6
[  363.148103]  [<ffffffff81246a19>] extent_writepages+0x4b/0x5c
[  363.166792]  [<ffffffff81230ce4>] ? btrfs_submit_direct+0x3f4/0x3f4
[  363.187074]  [<ffffffff810765ec>] ? get_parent_ip+0xc/0x3c
[  363.204975]  [<ffffffff8122f3fc>] btrfs_writepages+0x28/0x2a
[  363.223367]  [<ffffffff8110873d>] do_writepages+0x1e/0x2c
[  363.240980]  [<ffffffff810ff507>] __filemap_fdatawrite_range+0x55/0x57
[  363.261985]  [<ffffffff810fff50>] filemap_flush+0x1c/0x1e
[  363.279628]  [<ffffffff81231921>] btrfs_run_delalloc_work+0x32/0x69
[  363.299893]  [<ffffffff81252438>] normal_work_helper+0xfe/0x240
[  363.319143]  [<ffffffff81065e29>] process_one_work+0x195/0x2d2
[  363.338123]  [<ffffffff810660cb>] worker_thread+0x136/0x205
[  363.356348]  [<ffffffff81065f95>] ? process_scheduled_works+0x2f/0x2f
[  363.377203]  [<ffffffff8106b564>] kthread+0xae/0xb6
[  363.393396]  [<ffffffff8106b4b6>] ? __kthread_parkme+0x61/0x61
[  363.412469]  [<ffffffff81628d7c>] ret_from_fork+0x7c/0xb0
[  363.430228]  [<ffffffff8106b4b6>] ? __kthread_parkme+0x61/0x61

In the end, I went back to 3.14, and things work again.

Marc
Chris Samuel July 19, 2014, 1:59 a.m. UTC | #4
On Fri, 18 Jul 2014 05:44:57 PM Marc MERLIN wrote:

> Sorry, this may be a bit misleading. I actually lost 2 filesystems that
> were raid0 on top of dmcrypt.

Stupid question I know, but does this happen without dmcrypt?

cheers,
Chris
Marc MERLIN July 19, 2014, 5:40 a.m. UTC | #5
On Sat, Jul 19, 2014 at 11:59:24AM +1000, Chris Samuel wrote:
> On Fri, 18 Jul 2014 05:44:57 PM Marc MERLIN wrote:
> 
> > Sorry, this may be a bit misleading. I actually lost 2 filesystems that
> > were raid0 on top of dmcrypt.
> 
> Stupid question I know, but does this happen without dmcrypt?

It's not a stupid question: I don't use btrfs without dmcrypt, so I can't
say.
(and I'm not interested in trying :) 

Marc
Cody P Schafer July 19, 2014, 5:38 p.m. UTC | #6
On Thu, Jul 17, 2014 at 8:18 AM, Chris Mason <clm@fb.com> wrote:
>
> [ deadlocks during rsync in 3.15 with compression enabled ]
>
> Hi everyone,
>
> I still haven't been able to reproduce this one here, but I'm going
> through a series of tests with lzo compression forced and every
> operation forced to ordered.  Hopefully it'll kick it out soon.
>
> While I'm hammering away, could you please try this patch?  If this is
> the bug you're hitting, the deadlock will go away and you'll see this
> printk in the log.
>
> thanks!
>
> -chris
>
> diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
> index 3668048..8ab56df 100644
> --- a/fs/btrfs/inode.c
> +++ b/fs/btrfs/inode.c
> @@ -8157,6 +8157,13 @@ void btrfs_destroy_inode(struct inode *inode)
>                 spin_unlock(&root->fs_info->ordered_root_lock);
>         }
>
> +       spin_lock(&root->fs_info->ordered_root_lock);
> +       if (!list_empty(&BTRFS_I(inode)->ordered_operations)) {
> +               list_del_init(&BTRFS_I(inode)->ordered_operations);
> +printk(KERN_CRIT "racing inode deletion with ordered operations!!!!!!!!!!!\n");
> +       }
> +       spin_unlock(&root->fs_info->ordered_root_lock);
> +
>         if (test_bit(BTRFS_INODE_HAS_ORPHAN_ITEM,
>                      &BTRFS_I(inode)->runtime_flags)) {
>                 btrfs_info(root->fs_info, "inode %llu still on the orphan list",

Thanks Chris.

Running 3.15.6 with this patch applied on top:
 - still causes a hang with `rsync -hPaHAXx --del /mnt/home/nyx/ /home/nyx/`
 - no extra error messages printed (`dmesg | grep racing`) compared to without the patch

To recap some details (so I can have it all in one place):
 - /home/ is btrfs with compress=lzo
 - /mnt/home is btrfs with no compression enabled.
 - I have _not_ created any nodatacow files.
 - Both filesystems are on different physical disks.
 - Full stack is: sata <-> dmcrypt <-> lvm <-> btrfs (I noticed others mentioning the use of dmcrypt)
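
A sketch of a repro harness for this report, reusing the rsync invocation from the recap above; the iteration count, sleep, and grep patterns are arbitrary choices, not part of the original test:

  # Rerun the rsync workload until the debug printk or a hung-task
  # warning appears in the kernel log:
  for i in $(seq 1 50); do
      rsync -hPaHAXx --del /mnt/home/nyx/ /home/nyx/
      dmesg | grep -iE 'racing|blocked for more than' && break
      sleep 10
  done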
Martin Steigerwald July 19, 2014, 6:23 p.m. UTC | #7
On Saturday, 19 July 2014, 12:38:53, Cody P Schafer wrote:
> On Thu, Jul 17, 2014 at 8:18 AM, Chris Mason <clm@fb.com> wrote:
> > [ deadlocks during rsync in 3.15 with compression enabled ]
> > 
> > Hi everyone,
> > 
> > I still haven't been able to reproduce this one here, but I'm going
> > through a series of tests with lzo compression forced and every
> > operation forced to ordered.  Hopefully it'll kick it out soon.
> > 
> > While I'm hammering away, could you please try this patch?  If this is
> > the bug you're hitting, the deadlock will go away and you'll see this
> > printk in the log.
> > 
> > thanks!
> > 
> > -chris
> > 
> > diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
> > index 3668048..8ab56df 100644
> > --- a/fs/btrfs/inode.c
> > +++ b/fs/btrfs/inode.c
> > @@ -8157,6 +8157,13 @@ void btrfs_destroy_inode(struct inode *inode)
> > 
> >                 spin_unlock(&root->fs_info->ordered_root_lock);
> >         
> >         }
> > 
> > +       spin_lock(&root->fs_info->ordered_root_lock);
> > +       if (!list_empty(&BTRFS_I(inode)->ordered_operations)) {
> > +               list_del_init(&BTRFS_I(inode)->ordered_operations);
> > +printk(KERN_CRIT "racing inode deletion with ordered operations!!!!!!!!!!!\n");
> > +       }
> > +       spin_unlock(&root->fs_info->ordered_root_lock);
> > +
> > 
> >         if (test_bit(BTRFS_INODE_HAS_ORPHAN_ITEM,
> >         
> >                      &BTRFS_I(inode)->runtime_flags)) {
> >                 
> >                 btrfs_info(root->fs_info, "inode %llu still on the orphan list",
> 
> Thanks Chris.
> 
> Running 3.15.6 with this patch applied on top:
>  - still causes a hang with `rsync -hPaHAXx --del /mnt/home/nyx/ /home/nyx/`
> - no extra error messages printed (`dmesg | grep racing`) compared to
> without the patch

I got the same results with 3.16-rc5 + this patch (see thread BTRFS hang with
3.16-rc5). 3.16-rc4 is still fine for me. No hang whatsoever so far.

> To recap some details (so I can have it all in one place):
>  - /home/ is btrfs with compress=lzo

BTRFS RAID 1 with lzo.

>  - I have _not_ created any nodatacow files.

Me neither.

>  - Full stack is: sata <-> dmcrypt <-> lvm <-> btrfs (I noticed others
> mentioning the use of dmcrypt)

Same, except no dmcrypt.
Chris Mason July 22, 2014, 2:53 p.m. UTC | #8
On 07/19/2014 02:23 PM, Martin Steigerwald wrote:

>> Running 3.15.6 with this patch applied on top:
>>  - still causes a hang with `rsync -hPaHAXx --del /mnt/home/nyx/ /home/nyx/`
>> - no extra error messages printed (`dmesg | grep racing`) compared to
>> without the patch
> 
> I got the same results with 3.16-rc5 + this patch (see thread BTRFS hang with
> 3.16-rc5). 3.16-rc4 is still fine for me. No hang whatsoever so far.
> 
>> To recap some details (so I can have it all in one place):
>>  - /home/ is btrfs with compress=lzo
> 
> BTRFS RAID 1 with lzo.
> 
>>  - I have _not_ created any nodatacow files.
> 
> Me neither.
> 
>>  - Full stack is: sata <-> dmcrypt <-> lvm <-> btrfs (I noticed others
>> mentioning the use of dmcrypt)
> 
> Same, except no dmcrypt.
> 

Thanks for the help in tracking this down everyone.  We'll get there!
Are you all running multi-disk systems (from a btrfs POV, more than one
device)?  I don't care how many physical drives this maps to, just
whether btrfs thinks there's more than one drive.

-chris
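
A quick way for reporters to answer this: the "Total devices" line of btrfs filesystem show (visible in the outputs earlier in this thread) is the count btrfs itself believes in; the mount point below is a placeholder:

  # Does btrfs think this filesystem spans more than one device?
  btrfs filesystem show /mnt/btrfs_raid0 | grep 'Total devices'
  #   Total devices 2 FS bytes used 222.71GiB   -> multi-device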
Torbjørn July 22, 2014, 3:14 p.m. UTC | #9
On 07/22/2014 04:53 PM, Chris Mason wrote:
>
> On 07/19/2014 02:23 PM, Martin Steigerwald wrote:
>
>>> Running 3.15.6 with this patch applied on top:
>>>   - still causes a hang with `rsync -hPaHAXx --del /mnt/home/nyx/ /home/nyx/`
>>> - no extra error messages printed (`dmesg | grep racing`) compared to
>>> without the patch
>> I got the same results with 3.16-rc5 + this patch (see thread BTRFS hang with
>> 3.16-rc5). 3.16-rc4 is still fine for me. No hang whatsoever so far.
>>
>>> To recap some details (so I can have it all in one place):
>>>   - /home/ is btrfs with compress=lzo
>> BTRFS RAID 1 with lzo.
>>
>>>   - I have _not_ created any nodatacow files.
>> Me neither.
>>
>>>   - Full stack is: sata <-> dmcrypt <-> lvm <-> btrfs (I noticed others
>>> mentioning the use of dmcrypt)
>> Same, except no dmcrypt.
>>
> Thanks for the help in tracking this down everyone.  We'll get there!
> Are you all running multi-disk systems (from a btrfs POV, more than one
> device)?  I don't care how many physical drives this maps to, just
> whether btrfs thinks there's more than one drive.
>
> -chris
Hi,

In case it's interesting:
From an earlier email thread with subject: 3.15-rc6 - btrfs-transacti:4157 blocked for more than 120

TLDR: yes, btrfs sees multiple devices.

sata <-> dmcrypt <-> btrfs raid10
The btrfs raid10 consists of multiple dmcrypt devices from multiple sata devices.

Mount: /dev/mapper/sdu on /mnt/storage type btrfs (rw,noatime,space_cache,compress=lzo,inode_cache,subvol=storage)
(yes I know inode_cache is not recommended for general use)

I have a nocow directory in a separate subvolume containing vm-images used by kvm.
The same kvm-vms are reading/writing data from that array over nfs.

I'm still holding that system at 3.14.  Anything above it causes blocked tasks.

--
Torbjørn
Marc MERLIN July 22, 2014, 4:46 p.m. UTC | #10
On Tue, Jul 22, 2014 at 10:53:03AM -0400, Chris Mason wrote:
> Thanks for the help in tracking this down everyone.  We'll get there!
> Are you all running multi-disk systems (from a btrfs POV, more than one
> device)?  I don't care how many physical drives this maps to, just
> whether btrfs thinks there's more than one drive.

In the bugs I sent you, it was a mix of arrays that were
mdraid / dmcrypt / btrfs

I have also one array with:
disk1 / dmcrypt \
                 - btrfs (2 drives visible by btrfs)
disk2 / dmcrypt /

The multidrive setup seemed a bit worse, so I just destroyed it and went
back to putting all the drives together with mdadm and showing a single
dmcrypted device to btrfs.

But that is still super unstable on my server with 3.15, while being
somewhat usable on my laptop (it still hangs, but more rarely).
The one difference is that my laptop actually does
disk > dmcrypt > btrfs
while my server does 
disks > mdadm > dmcrypt > btrfs

Marc
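
For comparing stacks like these across reporters, one generic way to print the block-device layering; lsblk ships with util-linux, and the column list here is just an illustrative choice:

  # Show the stack (disk -> md -> dm-crypt -> filesystem) per device:
  lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT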
Torbjørn July 22, 2014, 7:42 p.m. UTC | #11
On 07/22/2014 04:53 PM, Chris Mason wrote:
>
> On 07/19/2014 02:23 PM, Martin Steigerwald wrote:
>
>>> Running 3.15.6 with this patch applied on top:
>>>   - still causes a hang with `rsync -hPaHAXx --del /mnt/home/nyx/ /home/nyx/`
>>> - no extra error messages printed (`dmesg | grep racing`) compared to
>>> without the patch
>> I got the same results with 3.16-rc5 + this patch (see thread BTRFS hang with
>> 3.16-rc5). 3.16-rc4 is still fine for me. No hang whatsoever so far.
>>
>>> To recap some details (so I can have it all in one place):
>>>   - /home/ is btrfs with compress=lzo
>> BTRFS RAID 1 with lzo.
>>
>>>   - I have _not_ created any nodatacow files.
>> Me neither.
>>
>>>   - Full stack is: sata <-> dmcrypt <-> lvm <-> btrfs (I noticed others
>>> mentioning the use of dmcrypt)
>> Same, except no dmcrypt.
>>
> Thanks for the help in tracking this down everyone.  We'll get there!
> Are you all running multi-disk systems (from a btrfs POV, more than one
> device)?  I don't care how many physical drives this maps to, just
> whether btrfs thinks there's more than one drive.
>
> -chris
3.16-rc6 with your patch on top still causes hangs here.
No traces of "racing" in dmesg.
The hang is on a btrfs raid0 consisting of 3 drives.
Full stack is: sata <-> dmcrypt <-> btrfs raid0

The hang was caused by:
1. Several rsync -av --inplace --delete <source> <backup subvol>
2. btrfs subvolume snapshot -r <backup subvol> <backup snap>

The rsync jobs are done one at a time (a sketch of the sequence is below).
btrfs is stuck when trying to create the read-only snapshot.
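
A minimal sketch of the backup sequence described above; the source and backup paths are hypothetical placeholders, while the rsync and btrfs invocations are the ones named in the report:

  #!/bin/sh
  # One rsync at a time into the backup subvolume, then a read-only snapshot.
  for src in /srv/data1 /srv/data2; do      # hypothetical sources
      rsync -av --inplace --delete "$src"/ "/mnt/backup/current/${src##*/}/"
  done
  btrfs subvolume snapshot -r /mnt/backup/current \
      "/mnt/backup/snap-$(date +%Y%m%d)"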

--
Torbjørn

All output via netconsole.
sysrq-w: https://gist.github.com/anonymous/d1837187e261f9a4cbd2#file-gistfile1-txt
sysrq-t: https://gist.github.com/anonymous/2bdb73f035ab9918c63d#file-gistfile1-txt

dmesg:
[ 9352.784136] INFO: task btrfs-transacti:3874 blocked for more than 120 
seconds.
[ 9352.784222]       Tainted: G            E 3.16.0-rc6+ #64
[ 9352.784270] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" 
disables this message.
[ 9352.784354] btrfs-transacti D ffff88042fc943c0     0  3874      2 
0x00000000
[ 9352.784413]  ffff8803fb9dfca0 0000000000000002 ffff8800c4214b90 
ffff8803fb9dffd8
[ 9352.784502]  00000000000143c0 00000000000143c0 ffff88041977b260 
ffff8803d29f23a0
[ 9352.784592]  ffff8803d29f23a8 7fffffffffffffff ffff8800c4214b90 
ffff880232e2c0a8
[ 9352.784682] Call Trace:
[ 9352.784726]  [<ffffffff8170eb59>] schedule+0x29/0x70
[ 9352.784774]  [<ffffffff8170df99>] schedule_timeout+0x209/0x280
[ 9352.784827]  [<ffffffff8170874b>] ? __slab_free+0xfe/0x2c3
[ 9352.784879]  [<ffffffff810829f4>] ? wake_up_worker+0x24/0x30
[ 9352.784929]  [<ffffffff8170f656>] wait_for_completion+0xa6/0x160
[ 9352.784981]  [<ffffffff8109d4e0>] ? wake_up_state+0x20/0x20
[ 9352.785049]  [<ffffffffc045b936>] 
btrfs_wait_and_free_delalloc_work+0x16/0x30 [btrfs]
[ 9352.785141]  [<ffffffffc04658be>] 
btrfs_run_ordered_operations+0x1ee/0x2c0 [btrfs]
[ 9352.785260]  [<ffffffffc044bbb7>] btrfs_commit_transaction+0x27/0xa40 
[btrfs]
[ 9352.785324]  [<ffffffffc0447d65>] transaction_kthread+0x1b5/0x240 [btrfs]
[ 9352.785385]  [<ffffffffc0447bb0>] ? 
btrfs_cleanup_transaction+0x560/0x560 [btrfs]
[ 9352.785469]  [<ffffffff8108cc52>] kthread+0xd2/0xf0
[ 9352.785517]  [<ffffffff8108cb80>] ? kthread_create_on_node+0x180/0x180
[ 9352.785571]  [<ffffffff81712dfc>] ret_from_fork+0x7c/0xb0
[ 9352.785620]  [<ffffffff8108cb80>] ? kthread_create_on_node+0x180/0x180
[ 9352.785678] INFO: task kworker/u16:3:6932 blocked for more than 120 
seconds.
[ 9352.785732]       Tainted: G            E 3.16.0-rc6+ #64
[ 9352.785780] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" 
disables this message.
[ 9352.785863] kworker/u16:3   D ffff88042fd943c0     0  6932      2 
0x00000000
[ 9352.785930] Workqueue: btrfs-flush_delalloc normal_work_helper [btrfs]
[ 9352.785983]  ffff88035f1bbb58 0000000000000002 ffff880417e564c0 
ffff88035f1bbfd8
[ 9352.786072]  00000000000143c0 00000000000143c0 ffff8800c1a03260 
ffff88042fd94cd8
[ 9352.786160]  ffff88042ffb4be8 ffff88035f1bbbe0 0000000000000002 
ffffffff81159930
[ 9352.786250] Call Trace:
[ 9352.786292]  [<ffffffff81159930>] ? wait_on_page_read+0x60/0x60
[ 9352.786343]  [<ffffffff8170ee6d>] io_schedule+0x9d/0x130
[ 9352.786393]  [<ffffffff8115993e>] sleep_on_page+0xe/0x20
[ 9352.786443]  [<ffffffff8170f3e8>] __wait_on_bit_lock+0x48/0xb0
[ 9352.786495]  [<ffffffff81159a4a>] __lock_page+0x6a/0x70
[ 9352.786544]  [<ffffffff810b14a0>] ? autoremove_wake_function+0x40/0x40
[ 9352.786607]  [<ffffffffc046711e>] ? flush_write_bio+0xe/0x10 [btrfs]
[ 9352.786669]  [<ffffffffc046b0c0>] 
extent_write_cache_pages.isra.28.constprop.46+0x3d0/0x3f0 [btrfs]
[ 9352.786766]  [<ffffffffc046cd2d>] extent_writepages+0x4d/0x70 [btrfs]
[ 9352.786828]  [<ffffffffc04506f0>] ? btrfs_submit_direct+0x6a0/0x6a0 
[btrfs]
[ 9352.786883]  [<ffffffff810b0d78>] ? __wake_up_common+0x58/0x90
[ 9352.786943]  [<ffffffffc044e1d8>] btrfs_writepages+0x28/0x30 [btrfs]
[ 9352.786997]  [<ffffffff811668ee>] do_writepages+0x1e/0x40
[ 9352.787045]  [<ffffffff8115b409>] __filemap_fdatawrite_range+0x59/0x60
[ 9352.787097]  [<ffffffff8115b4bc>] filemap_flush+0x1c/0x20
[ 9352.787151]  [<ffffffffc045158a>] btrfs_run_delalloc_work+0x5a/0xa0 
[btrfs]
[ 9352.787211]  [<ffffffffc047accf>] normal_work_helper+0x11f/0x2b0 [btrfs]
[ 9352.787266]  [<ffffffff810858d2>] process_one_work+0x182/0x450
[ 9352.787317]  [<ffffffff81086043>] worker_thread+0x123/0x5a0
[ 9352.787464]  [<ffffffff81085f20>] ? rescuer_thread+0x380/0x380
[ 9352.787515]  [<ffffffff8108cc52>] kthread+0xd2/0xf0
[ 9352.787563]  [<ffffffff8108cb80>] ? kthread_create_on_node+0x180/0x180
[ 9352.787616]  [<ffffffff81712dfc>] ret_from_fork+0x7c/0xb0
[ 9352.787666]  [<ffffffff8108cb80>] ? kthread_create_on_node+0x180/0x180
[ 9352.787744] INFO: task kworker/u16:9:8360 blocked for more than 120 
seconds.
[ 9352.787799]       Tainted: G            E 3.16.0-rc6+ #64
[ 9352.787847] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" 
disables this message.
[ 9352.787949] kworker/u16:9   D ffff88042fd543c0     0  8360      2 
0x00000000
[ 9352.788015] Workqueue: btrfs-delalloc normal_work_helper [btrfs]
[ 9352.788139]  ffff88035f20f838 0000000000000002 ffff8803216c1930 
ffff88035f20ffd8
[ 9352.788229]  00000000000143c0 00000000000143c0 ffff8804157e0000 
ffff88042fd54cd8
[ 9352.788317]  ffff88042ffad6e8 ffff88035f20f8c0 0000000000000002 
ffffffff81159930
[ 9352.788406] Call Trace:
[ 9352.788445]  [<ffffffff81159930>] ? wait_on_page_read+0x60/0x60
[ 9352.788495]  [<ffffffff8170ee6d>] io_schedule+0x9d/0x130
[ 9352.788543]  [<ffffffff8115993e>] sleep_on_page+0xe/0x20
[ 9352.788591]  [<ffffffff8170f3e8>] __wait_on_bit_lock+0x48/0xb0
[ 9352.788642]  [<ffffffff8133c8a4>] ? blk_finish_plug+0x14/0x40
[ 9352.788692]  [<ffffffff81159a4a>] __lock_page+0x6a/0x70
[ 9352.788740]  [<ffffffff810b14a0>] ? autoremove_wake_function+0x40/0x40
[ 9352.788792]  [<ffffffff8115ac34>] pagecache_get_page+0x164/0x1f0
[ 9352.788849]  [<ffffffffc048e447>] io_ctl_prepare_pages+0x67/0x180 [btrfs]
[ 9352.788933]  [<ffffffffc048fd0f>] __load_free_space_cache+0x1ff/0x700 
[btrfs]
[ 9352.788998]  [<ffffffffc049030c>] load_free_space_cache+0xfc/0x1c0 
[btrfs]
[ 9352.789059]  [<ffffffffc0430292>] cache_block_group+0x192/0x390 [btrfs]
[ 9352.789113]  [<ffffffff810b1460>] ? prepare_to_wait_event+0x100/0x100
[ 9352.789173]  [<ffffffffc04390a2>] find_free_extent+0xc72/0xcb0 [btrfs]
[ 9352.789233]  [<ffffffffc04392ef>] btrfs_reserve_extent+0xaf/0x1b0 [btrfs]
[ 9352.789295]  [<ffffffffc04526cc>] cow_file_range+0x13c/0x430 [btrfs]
[ 9352.789355]  [<ffffffffc0453c80>] ? 
submit_compressed_extents+0x480/0x480 [btrfs]
[ 9352.789444]  [<ffffffffc04539d2>] 
submit_compressed_extents+0x1d2/0x480 [btrfs]
[ 9352.789533]  [<ffffffffc0451524>] ? async_cow_free+0x24/0x30 [btrfs]
[ 9352.789592]  [<ffffffffc0453c80>] ? 
submit_compressed_extents+0x480/0x480 [btrfs]
[ 9352.789680]  [<ffffffffc0453d06>] async_cow_submit+0x86/0x90 [btrfs]
[ 9352.789737]  [<ffffffffc047ad43>] normal_work_helper+0x193/0x2b0 [btrfs]
[ 9352.789790]  [<ffffffff810858d2>] process_one_work+0x182/0x450
[ 9352.789840]  [<ffffffff81086043>] worker_thread+0x123/0x5a0
[ 9352.789889]  [<ffffffff81085f20>] ? rescuer_thread+0x380/0x380
[ 9352.789938]  [<ffffffff8108cc52>] kthread+0xd2/0xf0
[ 9352.789984]  [<ffffffff8108cb80>] ? kthread_create_on_node+0x180/0x180
[ 9352.790038]  [<ffffffff81712dfc>] ret_from_fork+0x7c/0xb0
[ 9352.790088]  [<ffffffff8108cb80>] ? kthread_create_on_node+0x180/0x180
[ 9352.790144] INFO: task kworker/u16:0:11682 blocked for more than 120 
seconds.
[ 9352.790199]       Tainted: G            E 3.16.0-rc6+ #64
[ 9352.795261] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" 
disables this message.
[ 9352.795347] kworker/u16:0   D ffff88042fd543c0     0 11682      2 
0x00000000
[ 9352.795409] Workqueue: writeback bdi_writeback_workfn (flush-btrfs-3)
[ 9352.795465]  ffff8802161cf9d8 0000000000000002 ffff8803fc3c0000 
ffff8802161cffd8
[ 9352.795554]  00000000000143c0 00000000000143c0 ffff8804197e0000 
ffff88042fd54cd8
[ 9352.795644]  ffff88042ffaa9e8 ffff8802161cfa60 0000000000000002 
ffffffff81159930
[ 9352.795734] Call Trace:
[ 9352.795776]  [<ffffffff81159930>] ? wait_on_page_read+0x60/0x60
[ 9352.795829]  [<ffffffff8170ee6d>] io_schedule+0x9d/0x130
[ 9352.795879]  [<ffffffff8115993e>] sleep_on_page+0xe/0x20
[ 9352.795928]  [<ffffffff8170f3e8>] __wait_on_bit_lock+0x48/0xb0
[ 9352.795979]  [<ffffffff81159a4a>] __lock_page+0x6a/0x70
[ 9352.796029]  [<ffffffff810b14a0>] ? autoremove_wake_function+0x40/0x40
[ 9352.796150]  [<ffffffffc046711e>] ? flush_write_bio+0xe/0x10 [btrfs]
[ 9352.796212]  [<ffffffffc046b0c0>] 
extent_write_cache_pages.isra.28.constprop.46+0x3d0/0x3f0 [btrfs]
[ 9352.796309]  [<ffffffffc046cd2d>] extent_writepages+0x4d/0x70 [btrfs]
[ 9352.796369]  [<ffffffffc04506f0>] ? btrfs_submit_direct+0x6a0/0x6a0 
[btrfs]
[ 9352.796428]  [<ffffffffc044e1d8>] btrfs_writepages+0x28/0x30 [btrfs]
[ 9352.796481]  [<ffffffff811668ee>] do_writepages+0x1e/0x40
[ 9352.796530]  [<ffffffff811f17d0>] __writeback_single_inode+0x40/0x210
[ 9352.796581]  [<ffffffff811f22b7>] writeback_sb_inodes+0x247/0x3e0
[ 9352.796632]  [<ffffffff811f24ef>] __writeback_inodes_wb+0x9f/0xd0
[ 9352.796682]  [<ffffffff811f2763>] wb_writeback+0x243/0x2c0
[ 9352.796731]  [<ffffffff811f4e09>] bdi_writeback_workfn+0x1b9/0x430
[ 9352.796784]  [<ffffffff810858d2>] process_one_work+0x182/0x450
[ 9352.796833]  [<ffffffff81086043>] worker_thread+0x123/0x5a0
[ 9352.796882]  [<ffffffff81085f20>] ? rescuer_thread+0x380/0x380
[ 9352.796933]  [<ffffffff8108cc52>] kthread+0xd2/0xf0
[ 9352.796979]  [<ffffffff8108cb80>] ? kthread_create_on_node+0x180/0x180
[ 9352.797031]  [<ffffffff81712dfc>] ret_from_fork+0x7c/0xb0
[ 9352.797079]  [<ffffffff8108cb80>] ? kthread_create_on_node+0x180/0x180
[ 9352.797134] INFO: task kworker/u16:4:13743 blocked for more than 120 
seconds.
[ 9352.797189]       Tainted: G            E 3.16.0-rc6+ #64
[ 9352.797299] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" 
disables this message.
[ 9352.797383] kworker/u16:4   D ffff88042fc943c0     0 13743      2 
0x00000000
[ 9352.797451] Workqueue: btrfs-flush_delalloc normal_work_helper [btrfs]
[ 9352.797505]  ffff880139843b58 0000000000000002 ffff880191e38000 
ffff880139843fd8
[ 9352.797594]  00000000000143c0 00000000000143c0 ffff88041977b260 
ffff88042fc94cd8
[ 9352.797684]  ffff88042ffb4be8 ffff880139843be0 0000000000000002 
ffffffff81159930
[ 9352.797773] Call Trace:
[ 9352.797813]  [<ffffffff81159930>] ? wait_on_page_read+0x60/0x60
[ 9352.797865]  [<ffffffff8170ee6d>] io_schedule+0x9d/0x130
[ 9352.797914]  [<ffffffff8115993e>] sleep_on_page+0xe/0x20
[ 9352.797963]  [<ffffffff8170f3e8>] __wait_on_bit_lock+0x48/0xb0
[ 9352.798013]  [<ffffffff81159a4a>] __lock_page+0x6a/0x70
[ 9352.798060]  [<ffffffff810b14a0>] ? autoremove_wake_function+0x40/0x40
[ 9352.798119]  [<ffffffffc046711e>] ? flush_write_bio+0xe/0x10 [btrfs]
[ 9352.798176]  [<ffffffffc046b0c0>] 
extent_write_cache_pages.isra.28.constprop.46+0x3d0/0x3f0 [btrfs]
[ 9352.798269]  [<ffffffffc046cd2d>] extent_writepages+0x4d/0x70 [btrfs]
[ 9352.798326]  [<ffffffffc04506f0>] ? btrfs_submit_direct+0x6a0/0x6a0 
[btrfs]
[ 9352.798385]  [<ffffffffc044e1d8>] btrfs_writepages+0x28/0x30 [btrfs]
[ 9352.798437]  [<ffffffff811668ee>] do_writepages+0x1e/0x40
[ 9352.798485]  [<ffffffff8115b409>] __filemap_fdatawrite_range+0x59/0x60
[ 9352.798537]  [<ffffffff8115b4bc>] filemap_flush+0x1c/0x20
[ 9352.798590]  [<ffffffffc045158a>] btrfs_run_delalloc_work+0x5a/0xa0 
[btrfs]
[ 9352.798650]  [<ffffffffc047accf>] normal_work_helper+0x11f/0x2b0 [btrfs]
[ 9352.798703]  [<ffffffff810858d2>] process_one_work+0x182/0x450
[ 9352.798752]  [<ffffffff81086043>] worker_thread+0x123/0x5a0
[ 9352.798801]  [<ffffffff81085f20>] ? rescuer_thread+0x380/0x380
[ 9352.798851]  [<ffffffff8108cc52>] kthread+0xd2/0xf0
[ 9352.798897]  [<ffffffff8108cb80>] ? kthread_create_on_node+0x180/0x180
[ 9352.798949]  [<ffffffff81712dfc>] ret_from_fork+0x7c/0xb0
[ 9352.798997]  [<ffffffff8108cb80>] ? kthread_create_on_node+0x180/0x180
[ 9352.799049] INFO: task btrfs:13823 blocked for more than 120 seconds.
[ 9352.799101]       Tainted: G            E 3.16.0-rc6+ #64
[ 9352.799148] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" 
disables this message.
[ 9352.799229] btrfs           D ffff88042fd943c0     0 13823  13461 
0x00000000
[ 9352.799286]  ffff880177203a80 0000000000000002 ffff8800c1a01930 
ffff880177203fd8
[ 9352.799375]  00000000000143c0 00000000000143c0 ffff8800c2f664c0 
ffff8803d29f2438
[ 9352.799463]  ffff8803d29f2440 7fffffffffffffff ffff8800c1a01930 
ffff880177203bc0
[ 9352.799552] Call Trace:
[ 9352.799590]  [<ffffffff8170eb59>] schedule+0x29/0x70
[ 9352.799637]  [<ffffffff8170df99>] schedule_timeout+0x209/0x280
[ 9352.799687]  [<ffffffff8109d4a3>] ? wake_up_process+0x23/0x40
[ 9352.799736]  [<ffffffff810829f4>] ? wake_up_worker+0x24/0x30
[ 9352.799786]  [<ffffffff810836ab>] ? insert_work+0x6b/0xb0
[ 9352.799877]  [<ffffffff8170f656>] wait_for_completion+0xa6/0x160
[ 9352.799928]  [<ffffffff8109d4e0>] ? wake_up_state+0x20/0x20
[ 9352.799989]  [<ffffffffc045b936>] 
btrfs_wait_and_free_delalloc_work+0x16/0x30 [btrfs]
[ 9352.800080]  [<ffffffffc045bafe>] __start_delalloc_inodes+0x1ae/0x2a0 
[btrfs]
[ 9352.800150]  [<ffffffffc045bc2e>] 
btrfs_start_delalloc_inodes+0x3e/0x120 [btrfs]
[ 9352.800235]  [<ffffffff810b1228>] ? finish_wait+0x58/0x70
[ 9352.800293]  [<ffffffffc047fa24>] btrfs_mksubvol.isra.29+0x1d4/0x570 
[btrfs]
[ 9352.800348]  [<ffffffff810b1460>] ? prepare_to_wait_event+0x100/0x100
[ 9352.800409]  [<ffffffffc047ff46>] 
btrfs_ioctl_snap_create_transid+0x186/0x190 [btrfs]
[ 9352.800501]  [<ffffffffc04800ab>] 
btrfs_ioctl_snap_create_v2+0xeb/0x130 [btrfs]
[ 9352.800592]  [<ffffffffc048218f>] btrfs_ioctl+0xcaf/0x2ae0 [btrfs]
[ 9352.800645]  [<ffffffff8118d933>] ? mmap_region+0x163/0x5d0
[ 9352.800696]  [<ffffffff81057c8c>] ? __do_page_fault+0x20c/0x550
[ 9352.800748]  [<ffffffff811dc840>] do_vfs_ioctl+0x2e0/0x4c0
[ 9352.800798]  [<ffffffff810a0e64>] ? vtime_account_user+0x54/0x60
[ 9352.800850]  [<ffffffff811dcaa1>] SyS_ioctl+0x81/0xa0
[ 9352.800899]  [<ffffffff817130bf>] tracesys+0xe1/0xe6
[ 9354.863323] kvm [4191]: vcpu0 unhandled rdmsr: 0x345
[ 9354.869074] kvm_set_msr_common: 118 callbacks suppressed
[ 9354.869130] kvm [4191]: vcpu0 unhandled wrmsr: 0x680 data 0
[ 9354.869185] kvm [4191]: vcpu0 unhandled wrmsr: 0x6c0 data 0
[ 9354.869240] kvm [4191]: vcpu0 unhandled wrmsr: 0x681 data 0
[ 9354.869294] kvm [4191]: vcpu0 unhandled wrmsr: 0x6c1 data 0
[ 9354.869346] kvm [4191]: vcpu0 unhandled wrmsr: 0x682 data 0
[ 9354.869397] kvm [4191]: vcpu0 unhandled wrmsr: 0x6c2 data 0
[ 9354.869449] kvm [4191]: vcpu0 unhandled wrmsr: 0x683 data 0
[ 9354.869502] kvm [4191]: vcpu0 unhandled wrmsr: 0x6c3 data 0
[ 9354.869556] kvm [4191]: vcpu0 unhandled wrmsr: 0x684 data 0
[ 9354.869610] kvm [4191]: vcpu0 unhandled wrmsr: 0x6c4 data 0
[ 9376.448998] kvm [4191]: vcpu0 unhandled rdmsr: 0x611
[ 9376.449052] kvm [4191]: vcpu0 unhandled rdmsr: 0x639
[ 9376.449100] kvm [4191]: vcpu0 unhandled rdmsr: 0x641
[ 9376.449148] kvm [4191]: vcpu0 unhandled rdmsr: 0x619
[ 9472.705412] INFO: task btrfs-transacti:3874 blocked for more than 120 
seconds.
[ 9472.705499]       Tainted: G            E 3.16.0-rc6+ #64
[ 9472.705547] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" 
disables this message.
[ 9472.705629] btrfs-transacti D ffff88042fc943c0     0  3874      2 
0x00000000
[ 9472.705688]  ffff8803fb9dfca0 0000000000000002 ffff8800c4214b90 
ffff8803fb9dffd8
[ 9472.705778]  00000000000143c0 00000000000143c0 ffff88041977b260 
ffff8803d29f23a0
[ 9472.705868]  ffff8803d29f23a8 7fffffffffffffff ffff8800c4214b90 
ffff880232e2c0a8
[ 9472.705976] Call Trace:
[ 9472.706019]  [<ffffffff8170eb59>] schedule+0x29/0x70
[ 9472.706067]  [<ffffffff8170df99>] schedule_timeout+0x209/0x280
[ 9472.706118]  [<ffffffff8170874b>] ? __slab_free+0xfe/0x2c3
[ 9472.706169]  [<ffffffff810829f4>] ? wake_up_worker+0x24/0x30
[ 9472.706219]  [<ffffffff8170f656>] wait_for_completion+0xa6/0x160
[ 9472.706270]  [<ffffffff8109d4e0>] ? wake_up_state+0x20/0x20
[ 9472.706338]  [<ffffffffc045b936>] 
btrfs_wait_and_free_delalloc_work+0x16/0x30 [btrfs]
[ 9472.706429]  [<ffffffffc04658be>] 
btrfs_run_ordered_operations+0x1ee/0x2c0 [btrfs]
[ 9472.706518]  [<ffffffffc044bbb7>] btrfs_commit_transaction+0x27/0xa40 
[btrfs]
[ 9472.706579]  [<ffffffffc0447d65>] transaction_kthread+0x1b5/0x240 [btrfs]
[ 9472.706637]  [<ffffffffc0447bb0>] ? 
btrfs_cleanup_transaction+0x560/0x560 [btrfs]
[ 9472.706720]  [<ffffffff8108cc52>] kthread+0xd2/0xf0
[ 9472.706767]  [<ffffffff8108cb80>] ? kthread_create_on_node+0x180/0x180
[ 9472.706820]  [<ffffffff81712dfc>] ret_from_fork+0x7c/0xb0
[ 9472.706869]  [<ffffffff8108cb80>] ? kthread_create_on_node+0x180/0x180
[ 9472.706926] INFO: task kworker/u16:3:6932 blocked for more than 120 
seconds.
[ 9472.706980]       Tainted: G            E 3.16.0-rc6+ #64
[ 9472.707029] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" 
disables this message.
[ 9472.707112] kworker/u16:3   D ffff88042fd943c0     0  6932      2 
0x00000000
[ 9472.707179] Workqueue: btrfs-flush_delalloc normal_work_helper [btrfs]
[ 9472.707233]  ffff88035f1bbb58 0000000000000002 ffff880417e564c0 
ffff88035f1bbfd8
[ 9472.707322]  00000000000143c0 00000000000143c0 ffff8800c1a03260 
ffff88042fd94cd8
[ 9472.707413]  ffff88042ffb4be8 ffff88035f1bbbe0 0000000000000002 
ffffffff81159930
[ 9472.707502] Call Trace:
[ 9472.707543]  [<ffffffff81159930>] ? wait_on_page_read+0x60/0x60
[ 9472.707593]  [<ffffffff8170ee6d>] io_schedule+0x9d/0x130
[ 9472.707643]  [<ffffffff8115993e>] sleep_on_page+0xe/0x20
[ 9472.707697]  [<ffffffff8170f3e8>] __wait_on_bit_lock+0x48/0xb0
[ 9472.707770]  [<ffffffff81159a4a>] __lock_page+0x6a/0x70
[ 9472.707820]  [<ffffffff810b14a0>] ? autoremove_wake_function+0x40/0x40
[ 9472.707884]  [<ffffffffc046711e>] ? flush_write_bio+0xe/0x10 [btrfs]
[ 9472.707947]  [<ffffffffc046b0c0>] 
extent_write_cache_pages.isra.28.constprop.46+0x3d0/0x3f0 [btrfs]
[ 9472.713000]  [<ffffffffc046cd2d>] extent_writepages+0x4d/0x70 [btrfs]
[ 9472.713092]  [<ffffffffc04506f0>] ? btrfs_submit_direct+0x6a0/0x6a0 
[btrfs]
[ 9472.713148]  [<ffffffff810b0d78>] ? __wake_up_common+0x58/0x90
[ 9472.713203]  [<ffffffffc044e1d8>] btrfs_writepages+0x28/0x30 [btrfs]
[ 9472.713256]  [<ffffffff811668ee>] do_writepages+0x1e/0x40
[ 9472.713305]  [<ffffffff8115b409>] __filemap_fdatawrite_range+0x59/0x60
[ 9472.713359]  [<ffffffff8115b4bc>] filemap_flush+0x1c/0x20
[ 9472.713425]  [<ffffffffc045158a>] btrfs_run_delalloc_work+0x5a/0xa0 
[btrfs]
[ 9472.713491]  [<ffffffffc047accf>] normal_work_helper+0x11f/0x2b0 [btrfs]
[ 9472.713547]  [<ffffffff810858d2>] process_one_work+0x182/0x450
[ 9472.713598]  [<ffffffff81086043>] worker_thread+0x123/0x5a0
[ 9472.713648]  [<ffffffff81085f20>] ? rescuer_thread+0x380/0x380
[ 9472.713723]  [<ffffffff8108cc52>] kthread+0xd2/0xf0
[ 9472.713794]  [<ffffffff8108cb80>] ? kthread_create_on_node+0x180/0x180
[ 9472.713849]  [<ffffffff81712dfc>] ret_from_fork+0x7c/0xb0
[ 9472.713898]  [<ffffffff8108cb80>] ? kthread_create_on_node+0x180/0x180
[ 9472.713953] INFO: task kworker/u16:9:8360 blocked for more than 120 
seconds.
[ 9472.714028]       Tainted: G            E 3.16.0-rc6+ #64
[ 9472.714076] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" 
disables this message.
[ 9472.714159] kworker/u16:9   D ffff88042fd543c0     0  8360      2 
0x00000000
[ 9472.714225] Workqueue: btrfs-delalloc normal_work_helper [btrfs]
[ 9472.714277]  ffff88035f20f838 0000000000000002 ffff8803216c1930 
ffff88035f20ffd8
[ 9472.714366]  00000000000143c0 00000000000143c0 ffff8804157e0000 
ffff88042fd54cd8
[ 9472.714455]  ffff88042ffad6e8 ffff88035f20f8c0 0000000000000002 
ffffffff81159930
[ 9472.714544] Call Trace:
[ 9472.714584]  [<ffffffff81159930>] ? wait_on_page_read+0x60/0x60
[ 9472.714636]  [<ffffffff8170ee6d>] io_schedule+0x9d/0x130
[ 9472.714684]  [<ffffffff8115993e>] sleep_on_page+0xe/0x20
[ 9472.714734]  [<ffffffff8170f3e8>] __wait_on_bit_lock+0x48/0xb0
[ 9472.714785]  [<ffffffff8133c8a4>] ? blk_finish_plug+0x14/0x40
[ 9472.714835]  [<ffffffff81159a4a>] __lock_page+0x6a/0x70
[ 9472.714884]  [<ffffffff810b14a0>] ? autoremove_wake_function+0x40/0x40
[ 9472.714938]  [<ffffffff8115ac34>] pagecache_get_page+0x164/0x1f0
[ 9472.714999]  [<ffffffffc048e447>] io_ctl_prepare_pages+0x67/0x180 [btrfs]
[ 9472.715088]  [<ffffffffc048fd0f>] __load_free_space_cache+0x1ff/0x700 
[btrfs]
[ 9472.715152]  [<ffffffffc049030c>] load_free_space_cache+0xfc/0x1c0 
[btrfs]
[ 9472.715213]  [<ffffffffc0430292>] cache_block_group+0x192/0x390 [btrfs]
[ 9472.715267]  [<ffffffff810b1460>] ? prepare_to_wait_event+0x100/0x100
[ 9472.715328]  [<ffffffffc04390a2>] find_free_extent+0xc72/0xcb0 [btrfs]
[ 9472.715388]  [<ffffffffc04392ef>] btrfs_reserve_extent+0xaf/0x1b0 [btrfs]
[ 9472.715451]  [<ffffffffc04526cc>] cow_file_range+0x13c/0x430 [btrfs]
[ 9472.715511]  [<ffffffffc0453c80>] ? 
submit_compressed_extents+0x480/0x480 [btrfs]
[ 9472.715602]  [<ffffffffc04539d2>] 
submit_compressed_extents+0x1d2/0x480 [btrfs]
[ 9472.715691]  [<ffffffffc0451524>] ? async_cow_free+0x24/0x30 [btrfs]
[ 9472.715748]  [<ffffffffc0453c80>] ? 
submit_compressed_extents+0x480/0x480 [btrfs]
[ 9472.715835]  [<ffffffffc0453d06>] async_cow_submit+0x86/0x90 [btrfs]
[ 9472.715894]  [<ffffffffc047ad43>] normal_work_helper+0x193/0x2b0 [btrfs]
[ 9472.715948]  [<ffffffff810858d2>] process_one_work+0x182/0x450
[ 9472.715998]  [<ffffffff81086043>] worker_thread+0x123/0x5a0
[ 9472.716047]  [<ffffffff81085f20>] ? rescuer_thread+0x380/0x380
[ 9472.716098]  [<ffffffff8108cc52>] kthread+0xd2/0xf0
[ 9472.716145]  [<ffffffff8108cb80>] ? kthread_create_on_node+0x180/0x180
[ 9472.716197]  [<ffffffff81712dfc>] ret_from_fork+0x7c/0xb0
[ 9472.716246]  [<ffffffff8108cb80>] ? kthread_create_on_node+0x180/0x180
[ 9472.716303] INFO: task kworker/u16:0:11682 blocked for more than 120 
seconds.
[ 9472.716357]       Tainted: G            E 3.16.0-rc6+ #64
[ 9472.716405] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" 
disables this message.
[ 9472.716488] kworker/u16:0   D ffff88042fd543c0     0 11682      2 
0x00000000
[ 9472.716548] Workqueue: writeback bdi_writeback_workfn (flush-btrfs-3)
[ 9472.716610]  ffff8802161cf9d8 0000000000000002 ffff8803fc3c0000 
ffff8802161cffd8
[ 9472.716699]  00000000000143c0 00000000000143c0 ffff8804197e0000 
ffff88042fd54cd8
[ 9472.716788]  ffff88042ffaa9e8 ffff8802161cfa60 0000000000000002 
ffffffff81159930
[ 9472.716878] Call Trace:
[ 9472.716917]  [<ffffffff81159930>] ? wait_on_page_read+0x60/0x60
[ 9472.716968]  [<ffffffff8170ee6d>] io_schedule+0x9d/0x130
[ 9472.717017]  [<ffffffff8115993e>] sleep_on_page+0xe/0x20
[ 9472.717067]  [<ffffffff8170f3e8>] __wait_on_bit_lock+0x48/0xb0
[ 9472.717118]  [<ffffffff81159a4a>] __lock_page+0x6a/0x70
[ 9472.717167]  [<ffffffff810b14a0>] ? autoremove_wake_function+0x40/0x40
[ 9472.717231]  [<ffffffffc046711e>] ? flush_write_bio+0xe/0x10 [btrfs]
[ 9472.717293]  [<ffffffffc046b0c0>] 
extent_write_cache_pages.isra.28.constprop.46+0x3d0/0x3f0 [btrfs]
[ 9472.717402]  [<ffffffffc046cd2d>] extent_writepages+0x4d/0x70 [btrfs]
[ 9472.717465]  [<ffffffffc04506f0>] ? btrfs_submit_direct+0x6a0/0x6a0 
[btrfs]
[ 9472.717553]  [<ffffffffc044e1d8>] btrfs_writepages+0x28/0x30 [btrfs]
[ 9472.717607]  [<ffffffff811668ee>] do_writepages+0x1e/0x40
[ 9472.717657]  [<ffffffff811f17d0>] __writeback_single_inode+0x40/0x210
[ 9472.717710]  [<ffffffff811f22b7>] writeback_sb_inodes+0x247/0x3e0
[ 9472.717784]  [<ffffffff811f24ef>] __writeback_inodes_wb+0x9f/0xd0
[ 9472.717836]  [<ffffffff811f2763>] wb_writeback+0x243/0x2c0
[ 9472.717885]  [<ffffffff811f4e09>] bdi_writeback_workfn+0x1b9/0x430
[ 9472.717938]  [<ffffffff810858d2>] process_one_work+0x182/0x450
[ 9472.717990]  [<ffffffff81086043>] worker_thread+0x123/0x5a0
[ 9472.718039]  [<ffffffff81085f20>] ? rescuer_thread+0x380/0x380
[ 9472.718089]  [<ffffffff8108cc52>] kthread+0xd2/0xf0
[ 9472.718136]  [<ffffffff8108cb80>] ? kthread_create_on_node+0x180/0x180
[ 9472.718188]  [<ffffffff81712dfc>] ret_from_fork+0x7c/0xb0
[ 9472.718237]  [<ffffffff8108cb80>] ? kthread_create_on_node+0x180/0x180
Chris Mason July 22, 2014, 7:50 p.m. UTC | #12
On 07/22/2014 03:42 PM, Torbjørn wrote:
> On 07/22/2014 04:53 PM, Chris Mason wrote:
>>
>> On 07/19/2014 02:23 PM, Martin Steigerwald wrote:
>>
>>>> Running 3.15.6 with this patch applied on top:
>>>>   - still causes a hang with `rsync -hPaHAXx --del /mnt/home/nyx/
>>>> /home/nyx/`
>>>> - no extra error messages printed (`dmesg | grep racing`) compared to
>>>> without the patch
>>> I got the same results with 3.16-rc5 + this patch (see thread BTRFS hang with
>>> 3.16-rc5). 3.16-rc4 is still fine for me. No hang whatsoever so far.
>>>
>>>> To recap some details (so I can have it all in one place):
>>>>   - /home/ is btrfs with compress=lzo
>>> BTRFS RAID 1 with lzo.
>>>
>>>>   - I have _not_ created any nodatacow files.
>>> Me neither.
>>>
>>>>   - Full stack is: sata <-> dmcrypt <-> lvm <-> btrfs (I noticed others
>>>> mentioning the use of dmcrypt)
>>> Same, except no dmcrypt.
>>>
>> Thanks for the help in tracking this down everyone.  We'll get there!
>> Are you all running multi-disk systems (from a btrfs POV, more than one
>> device)?  I don't care how many physical drives this maps to, just
>> whether btrfs thinks there's more than one drive.
>>
>> -chris
> 3.16-rc6 with your patch on top still causes hangs here.
> No traces of "racing" in dmesg.
> The hang is on a btrfs raid0 consisting of 3 drives.
> Full stack is: sata <-> dmcrypt <-> btrfs raid0
> 
> The hang was caused by:
> 1. Several rsync -av --inplace --delete <source> <backup subvol>
> 2. btrfs subvolume snapshot -r <backup subvol> <backup snap>
> 
> The rsync jobs are done one at a time.
> btrfs is stuck when trying to create the read-only snapshot

The trace is similar, but you're stuck trying to read the free space
cache.  This one I saw earlier this morning, but I haven't seen these
parts from the 3.15 bug reports.

Maybe they are related, though; I'll dig into the 3.15 bug reports again.

-chris
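
An aside for anyone stuck in load_free_space_cache as in the trace above: btrfs can rebuild the free space cache via the one-shot clear_cache mount option. Whether that sidesteps this particular hang is an assumption, not something established in this thread:

  # Rebuild the free space cache on this mount (one-time, can be slow);
  # device and mount point are taken from the report earlier in the thread:
  mount -o clear_cache /dev/mapper/sdu /mnt/storage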
Torbjørn July 22, 2014, 8:10 p.m. UTC | #13
On 07/22/2014 09:50 PM, Chris Mason wrote:
> On 07/22/2014 03:42 PM, Torbjørn wrote:
>> On 07/22/2014 04:53 PM, Chris Mason wrote:
>>> On 07/19/2014 02:23 PM, Martin Steigerwald wrote:
>>>
>>>>> Running 3.15.6 with this patch applied on top:
>>>>>    - still causes a hang with `rsync -hPaHAXx --del /mnt/home/nyx/
>>>>> /home/nyx/`
>>>>> - no extra error messages printed (`dmesg | grep racing`) compared to
>>>>> without the patch
>>>> I got the same results with 3.16-rc5 + this patch (see thread BTRFS hang with
>>>> 3.16-rc5). 3.16-rc4 is still fine for me. No hang whatsoever so far.
>>>>
>>>>> To recap some details (so I can have it all in one place):
>>>>>    - /home/ is btrfs with compress=lzo
>>>> BTRFS RAID 1 with lzo.
>>>>
>>>>>    - I have _not_ created any nodatacow files.
>>>> Me neither.
>>>>
>>>>>    - Full stack is: sata <-> dmcrypt <-> lvm <-> btrfs (I noticed others
>>>>> mentioning the use of dmcrypt)
>>>> Same, except no dmcrypt.
>>>>
>>> Thanks for the help in tracking this down everyone.  We'll get there!
>>> Are you all running multi-disk systems (from a btrfs POV, more than one
>>> device)?  I don't care how many physical drives this maps to, just
>>> whether btrfs thinks there's more than one drive.
>>>
>>> -chris
>> 3.16-rc6 with your patch on top still causes hangs here.
>> No traces of "racing" in dmesg.
>> The hang is on a btrfs raid0 consisting of 3 drives.
>> Full stack is: sata <-> dmcrypt <-> btrfs raid0
>>
>> The hang was caused by:
>> 1. Several rsync -av --inplace --delete <source> <backup subvol>
>> 2. btrfs subvolume snapshot -r <backup subvol> <backup snap>
>>
>> The rsync jobs are done one at a time.
>> btrfs is stuck when trying to create the read-only snapshot
> The trace is similar, but you're stuck trying to read the free space
> cache.  This one I saw earlier this morning, but I haven't seen these
> parts from the 3.15 bug reports.
>
> Maybe they are related though, I'll dig into the 3.15 bug reports again.
>
> -chris
In case it was not clear, this hang was on a different btrfs volume than 
the 3.15 hang (but on the same server).
Earlier, the affected volume stayed readable during the hang; this time 
the volume is not even readable.

I'll keep the patched 3.16 running and see if I can trigger something 
similar to the 3.15 hang.
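
For reference, the reproducer quoted above boils down to roughly this
(paths hypothetical; the rsync jobs run one at a time):

    for src in /data/vol1 /data/vol2 /data/vol3; do
        rsync -av --inplace --delete "$src" /mnt/backup/current/
    done
    btrfs subvolume snapshot -r /mnt/backup/current \
        /mnt/backup/snap-$(date +%F)

It gets stuck at the read-only snapshot step.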

Thanks

--
Torbjørn
Martin Steigerwald July 22, 2014, 9:13 p.m. UTC | #14
Am Dienstag, 22. Juli 2014, 10:53:03 schrieb Chris Mason:
> On 07/19/2014 02:23 PM, Martin Steigerwald wrote:
> >> Running 3.15.6 with this patch applied on top:
> >>  - still causes a hang with `rsync -hPaHAXx --del /mnt/home/nyx/
> >>  /home/nyx/`
> >> 
> >> - no extra error messages printed (`dmesg | grep racing`) compared to
> >> without the patch
> > 
> > I got same results with 3.16-rc5 + this patch (see thread BTRFS hang with
> > 3.16-rc5). 3.16-rc4 still is fine with me. No hang whatsoever so far.
> > 
> >> To recap some details (so I can have it all in one place):
> >>  - /home/ is btrfs with compress=lzo
> > 
> > BTRFS RAID 1 with lzo.
> > 
> >>  - I have _not_ created any nodatacow files.
> > 
> > Me neither.
> > 
> >>  - Full stack is: sata <-> dmcrypt <-> lvm <-> btrfs (I noticed others
> >> 
> >> mentioning the use of dmcrypt)
> > 
> > Same, except no dmcrypt.
> 
> Thanks for the help in tracking this down everyone.  We'll get there!
> Are you all running multi-disk systems (from a btrfs POV, more than one
> device?)  I don't care how many physical drives this maps to, just does
> btrfs think there's more than one drive.

As I said before, I am using BTRFS RAID 1: two logical volumes on two distinct 
SSDs. The RAID is directly in BTRFS, no SoftRAID here (which I wouldn't want to 
use with SSDs anyway).
Chris Mason July 22, 2014, 9:15 p.m. UTC | #15
On 07/22/2014 05:13 PM, Martin Steigerwald wrote:
> Am Dienstag, 22. Juli 2014, 10:53:03 schrieb Chris Mason:
>> On 07/19/2014 02:23 PM, Martin Steigerwald wrote:
>>>> Running 3.15.6 with this patch applied on top:
>>>>  - still causes a hang with `rsync -hPaHAXx --del /mnt/home/nyx/
>>>>  /home/nyx/`
>>>>
>>>> - no extra error messages printed (`dmesg | grep racing`) compared to
>>>> without the patch
>>>
>>> I got same results with 3.16-rc5 + this patch (see thread BTRFS hang with
>>> 3.16-rc5). 3.16-rc4 still is fine with me. No hang whatsoever so far.
>>>
>>>> To recap some details (so I can have it all in one place):
>>>>  - /home/ is btrfs with compress=lzo
>>>
>>> BTRFS RAID 1 with lzo.
>>>
>>>>  - I have _not_ created any nodatacow files.
>>>
>>> Me neither.
>>>
>>>>  - Full stack is: sata <-> dmcrypt <-> lvm <-> btrfs (I noticed others
>>>>
>>>> mentioning the use of dmcrypt)
>>>
>>> Same, except no dmcrypt.
>>
>> Thanks for the help in tracking this down everyone.  We'll get there!
>> Are you all running multi-disk systems (from a btrfs POV, more than one
>> device?)  I don't care how many physical drives this maps to, just does
>> btrfs think there's more than one drive.
> 
> As I said before, I am using BTRFS RAID 1: two logical volumes on two distinct 
> SSDs. The RAID is directly in BTRFS, no SoftRAID here (which I wouldn't want to 
> use with SSDs anyway).
> 

When you say logical volumes, you mean LVM right?  Just making sure I
know all the pieces involved.
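
For anyone unsure how to check, btrfs filesystem show against the
mount point (hypothetical here) prints a "Total devices" line, which
is the number of devices btrfs itself sees, regardless of what sits
below it:

    btrfs filesystem show /home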

-chris
Rich Freeman July 23, 2014, 1:06 a.m. UTC | #16
On Tue, Jul 22, 2014 at 10:53 AM, Chris Mason <clm@fb.com> wrote:
>
> Thanks for the help in tracking this down everyone.  We'll get there!
> Are you all running multi-disk systems (from a btrfs POV, more than one
> device?)  I don't care how many physical drives this maps to, just does
> btrfs think there's more than one drive.

I've been away on vacation so I haven't been able to try your latest
patch, but I can try whatever is out there starting this weekend.

I was getting fairly consistent hangs during heavy IO (especially
rsync) on 3.15 with lzo enabled.  This is on raid1 across 5 drives,
directly against the partitions themselves (no dmcrypt, mdadm, lvm,
etc).  I disabled lzo and haven't had problems since.  I'm now running
on mainline without issue, but I think I did see the hang on mainline
when I tried enabling lzo again briefly.

Rich
Felix Seidel July 23, 2014, 6:38 a.m. UTC | #17
On 23 Jul 2014, at 03:06, Rich Freeman <r-btrfs@thefreemanclan.net> wrote:
> I disabled lzo and haven't had problems since.  I'm now running
> on mainline without issue, but I think I did see the hang on mainline
> when I tried enabling lzo again briefly.

Can confirm. I’m running mainline 3.16-rc5 and was experiencing deadlocks
with LZO enabled.
Disabled it, and now all seems OK.

Using btrfs RAID1 -> dm-crypt -> SATA.
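
Turning lzo off here just meant dropping the compress option and
remounting; extents already written with lzo stay compressed on disk
but remain readable. A rough sketch (fstab line and mount point
hypothetical):

    # /etc/fstab: change
    #   /dev/mapper/nas-data  /srv  btrfs  defaults,compress=lzo  0 0
    # to
    #   /dev/mapper/nas-data  /srv  btrfs  defaults  0 0
    umount /srv && mount /srv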

I’ve attached some more dmesg “blocked” messages from kernel versions
3.15.5, 3.14.6 and 3.16-rc5, just in case it helps anyone.
Jul 18 23:36:58 nas kernel: INFO: task sudo:1214 blocked for more than 120 seconds.
Jul 18 23:36:58 nas kernel:       Tainted: G           O  3.15.5-2-ARCH #1
Jul 18 23:36:58 nas kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 18 23:36:58 nas kernel: sudo            D 0000000000000000     0  1214      1 0x00000004
Jul 18 23:36:58 nas kernel:  ffff88001d0ebc20 0000000000000086 ffff88002cca5bb0 0000000000014700
Jul 18 23:36:58 nas kernel:  ffff88001d0ebfd8 0000000000014700 ffff88002cca5bb0 0000000000000000
Jul 18 23:36:58 nas kernel:  ffff880028ee4000 0000000000000003 00000000284e0d53 0000000000000002
Jul 18 23:36:58 nas kernel: Call Trace:
Jul 18 23:36:58 nas kernel:  [<ffffffff815110dc>] ? __do_page_fault+0x2ec/0x600
Jul 18 23:36:58 nas kernel:  [<ffffffff81509fa9>] schedule+0x29/0x70
Jul 18 23:36:58 nas kernel:  [<ffffffff8150a426>] schedule_preempt_disabled+0x16/0x20
Jul 18 23:36:58 nas kernel:  [<ffffffff8150bda5>] __mutex_lock_slowpath+0xe5/0x230
Jul 18 23:36:58 nas kernel:  [<ffffffff8150bf07>] mutex_lock+0x17/0x30
Jul 18 23:36:58 nas kernel:  [<ffffffff811bfa24>] lookup_slow+0x34/0xc0
Jul 18 23:36:58 nas kernel:  [<ffffffff811c1b73>] path_lookupat+0x723/0x880
Jul 18 23:36:58 nas kernel:  [<ffffffff8114f111>] ? release_pages+0xc1/0x280
Jul 18 23:36:58 nas kernel:  [<ffffffff811bfd97>] ? getname_flags+0x37/0x130
Jul 18 23:36:58 nas kernel:  [<ffffffff811c1cf6>] filename_lookup.isra.30+0x26/0x80
Jul 18 23:36:58 nas kernel:  [<ffffffff811c4fd7>] user_path_at_empty+0x67/0xd0
Jul 18 23:36:58 nas kernel:  [<ffffffff81172b52>] ? unmap_region+0xe2/0x130
Jul 18 23:36:58 nas kernel:  [<ffffffff811c5051>] user_path_at+0x11/0x20
Jul 18 23:36:58 nas kernel:  [<ffffffff811b979a>] vfs_fstatat+0x6a/0xd0
Jul 18 23:36:58 nas kernel:  [<ffffffff811b981b>] vfs_stat+0x1b/0x20
Jul 18 23:36:58 nas kernel:  [<ffffffff811b9df9>] SyS_newstat+0x29/0x60
Jul 18 23:36:58 nas kernel:  [<ffffffff8117501c>] ? vm_munmap+0x4c/0x60
Jul 18 23:36:58 nas kernel:  [<ffffffff81175f92>] ? SyS_munmap+0x22/0x30
Jul 18 23:36:58 nas kernel:  [<ffffffff81515fa9>] system_call_fastpath+0x16/0x1b
---
Jul 19 18:34:17 nas kernel: INFO: task rsync:4900 blocked for more than 120 seconds.
Jul 19 18:34:17 nas kernel:       Tainted: G           O  3.15.5-2-ARCH #1
Jul 19 18:34:17 nas kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 19 18:34:17 nas kernel: rsync           D 0000000000000000     0  4900   4899 0x00000000
Jul 19 18:34:17 nas kernel:  ffff880005947c20 0000000000000082 ffff880034aa4750 0000000000014700
Jul 19 18:34:17 nas kernel:  ffff880005947fd8 0000000000014700 ffff880034aa4750 ffffffff810a5995
Jul 19 18:34:17 nas kernel:  ffff88011fc14700 ffff8800dd828a30 ffff8800cece6a00 ffff880005947bd8
Jul 19 18:34:17 nas kernel: Call Trace:
Jul 19 18:34:17 nas kernel:  [<ffffffff810a5995>] ? set_next_entity+0x95/0xb0
Jul 19 18:34:17 nas kernel:  [<ffffffff810ac0be>] ? pick_next_task_fair+0x46e/0x550
Jul 19 18:34:17 nas kernel:  [<ffffffff810136c1>] ? __switch_to+0x1f1/0x540
Jul 19 18:34:17 nas kernel:  [<ffffffff81509fa9>] schedule+0x29/0x70
Jul 19 18:34:17 nas kernel:  [<ffffffff8150a426>] schedule_preempt_disabled+0x16/0x20
Jul 19 18:34:17 nas kernel:  [<ffffffff8150bda5>] __mutex_lock_slowpath+0xe5/0x230
Jul 19 18:34:17 nas kernel:  [<ffffffff8150bf07>] mutex_lock+0x17/0x30
Jul 19 18:34:17 nas kernel:  [<ffffffff811bfa24>] lookup_slow+0x34/0xc0
Jul 19 18:34:17 nas kernel:  [<ffffffff811c1b73>] path_lookupat+0x723/0x880
Jul 19 18:34:17 nas kernel:  [<ffffffff8150a2bf>] ? io_schedule+0xbf/0xf0
Jul 19 18:34:17 nas kernel:  [<ffffffff8150a7d1>] ? __wait_on_bit_lock+0x91/0xb0
Jul 19 18:34:17 nas kernel:  [<ffffffff811bfd97>] ? getname_flags+0x37/0x130
Jul 19 18:34:17 nas kernel:  [<ffffffff811c1cf6>] filename_lookup.isra.30+0x26/0x80
Jul 19 18:34:17 nas kernel:  [<ffffffff811c4fd7>] user_path_at_empty+0x67/0xd0
Jul 19 18:34:17 nas kernel:  [<ffffffff811c5051>] user_path_at+0x11/0x20
Jul 19 18:34:17 nas kernel:  [<ffffffff811b979a>] vfs_fstatat+0x6a/0xd0
Jul 19 18:34:17 nas kernel:  [<ffffffff811d4414>] ? mntput+0x24/0x40
Jul 19 18:34:17 nas kernel:  [<ffffffff811b983e>] vfs_lstat+0x1e/0x20
Jul 19 18:34:17 nas kernel:  [<ffffffff811b9e59>] SyS_newlstat+0x29/0x60
Jul 19 18:34:17 nas kernel:  [<ffffffff8108a3c4>] ? task_work_run+0xa4/0xe0
Jul 19 18:34:17 nas kernel:  [<ffffffff8150e939>] ? do_device_not_available+0x19/0x20
Jul 19 18:34:17 nas kernel:  [<ffffffff8151760e>] ? device_not_available+0x1e/0x30
Jul 19 18:34:17 nas kernel:  [<ffffffff81515fa9>] system_call_fastpath+0x16/0x1b
---
Jul 19 23:58:39 nas kernel: INFO: task rsync:3407 blocked for more than 120 seconds.
Jul 19 23:58:39 nas kernel:       Not tainted 3.14.6-1-ARCH #1
Jul 19 23:58:39 nas kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 19 23:58:39 nas kernel: rsync           D ffff8800cd9e3110     0  3407   3405 0x00000000
Jul 19 23:58:39 nas kernel:  ffff8800a2f21bc0 0000000000000086 ffff8800a2f21b10 ffff8800cd9e3110
Jul 19 23:58:39 nas kernel:  00000000000146c0 ffff8800a2f21fd8 00000000000146c0 ffff8800cd9e3110
Jul 19 23:58:39 nas kernel:  00000000ffffffef ffff8800dd51bf60 ffffffffa02f4c5c ffff8800a2f21b28
Jul 19 23:58:39 nas kernel: Call Trace:
Jul 19 23:58:39 nas kernel:  [<ffffffffa02f4c5c>] ? __set_extent_bit+0x45c/0x550 [btrfs]
Jul 19 23:58:39 nas kernel:  [<ffffffffa02f4263>] ? free_extent_state+0x43/0xc0 [btrfs]
Jul 19 23:58:39 nas kernel:  [<ffffffffa02f4c5c>] ? __set_extent_bit+0x45c/0x550 [btrfs]
Jul 19 23:58:39 nas kernel:  [<ffffffff8150bb09>] schedule+0x29/0x70
Jul 19 23:58:39 nas kernel:  [<ffffffffa02f5b72>] lock_extent_bits+0x152/0x1e0 [btrfs]
Jul 19 23:58:39 nas kernel:  [<ffffffff810b3ec0>] ? __wake_up_sync+0x20/0x20
Jul 19 23:58:39 nas kernel:  [<ffffffffa02e39b9>] btrfs_evict_inode+0x139/0x520 [btrfs]
Jul 19 23:58:39 nas kernel:  [<ffffffff811d5ac0>] evict+0xb0/0x1c0
Jul 19 23:58:39 nas kernel:  [<ffffffff811d6375>] iput+0xf5/0x1a0
Jul 19 23:58:39 nas kernel:  [<ffffffff811d1bc0>] dentry_kill+0x240/0x290
Jul 19 23:58:39 nas kernel:  [<ffffffff811d1c8b>] dput+0x7b/0x120
Jul 19 23:58:39 nas kernel:  [<ffffffff811cc0da>] SyS_renameat+0x2ca/0x400
Jul 19 23:58:39 nas kernel:  [<ffffffff811bd001>] ? __sb_end_write+0x31/0x60
Jul 19 23:58:39 nas kernel:  [<ffffffff811d9111>] ? mnt_drop_write+0x31/0x50
Jul 19 23:58:39 nas kernel:  [<ffffffff811b81c3>] ? chmod_common+0xe3/0x160
Jul 19 23:58:39 nas kernel:  [<ffffffff811da334>] ? mntput+0x24/0x40
Jul 19 23:58:39 nas kernel:  [<ffffffff811c478e>] ? path_put+0x1e/0x30
Jul 19 23:58:39 nas kernel:  [<ffffffff811b96e2>] ? SyS_fchmodat+0x62/0xc0
Jul 19 23:58:39 nas kernel:  [<ffffffff811cc22b>] SyS_rename+0x1b/0x20
Jul 19 23:58:39 nas kernel:  [<ffffffff81517b29>] system_call_fastpath+0x16/0x1b
---
Jul 21 22:31:02 nas kernel: INFO: task kworker/u8:1:12866 blocked for more than 120 seconds.
Jul 21 22:31:02 nas kernel:       Not tainted 3.16.0-1-mainline #1
Jul 21 22:31:02 nas kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 21 22:31:02 nas kernel: kworker/u8:1    D 0000000000000000     0 12866      2 0x00000000
Jul 21 22:31:02 nas kernel: Workqueue: btrfs-delalloc normal_work_helper [btrfs]
Jul 21 22:31:02 nas kernel:  ffff88010760f5d8 0000000000000046 ffff8800d293bd20 00000000000145c0
Jul 21 22:31:02 nas kernel:  ffff88010760ffd8 00000000000145c0 ffff8800d293bd20 0000000000000001
Jul 21 22:31:02 nas kernel:  ffffffff810b6c30 ffff88010760f5c8 ffff88010d4bc2e8 ffff8800a45b4000
Jul 21 22:31:02 nas kernel: Call Trace:
Jul 21 22:31:02 nas kernel:  [<ffffffff810b6c30>] ? __wake_up_sync+0x20/0x20
Jul 21 22:31:02 nas kernel:  [<ffffffffa0357406>] ? btrfs_bio_counter_dec+0x26/0x60 [btrfs]
Jul 21 22:31:02 nas kernel:  [<ffffffffa0312665>] ? btrfs_map_bio+0x4b5/0x520 [btrfs]
Jul 21 22:31:02 nas kernel:  [<ffffffff8114b750>] ? filemap_fdatawait+0x30/0x30
Jul 21 22:31:02 nas kernel:  [<ffffffff8151e919>] schedule+0x29/0x70
Jul 21 22:31:02 nas kernel:  [<ffffffff8151ebf4>] io_schedule+0x94/0xf0
Jul 21 22:31:02 nas kernel:  [<ffffffff8114b75e>] sleep_on_page+0xe/0x20
Jul 21 22:31:02 nas kernel:  [<ffffffff8151f014>] __wait_on_bit+0x64/0x90
Jul 21 22:31:02 nas kernel:  [<ffffffff8114b509>] wait_on_page_bit+0x89/0xa0
Jul 21 22:31:02 nas kernel:  [<ffffffff810b6c70>] ? autoremove_wake_function+0x40/0x40
Jul 21 22:31:02 nas kernel:  [<ffffffffa030943a>] read_extent_buffer_pages+0x30a/0x350 [btrfs]
Jul 21 22:31:02 nas kernel:  [<ffffffffa02dc1e0>] ? free_root_pointers+0x60/0x60 [btrfs]
Jul 21 22:31:02 nas kernel:  [<ffffffffa02ddd01>] btree_read_extent_buffer_pages.constprop.45+0xb1/0x110 [btrfs]
Jul 21 22:31:02 nas kernel:  [<ffffffffa02de378>] read_tree_block+0x38/0x60 [btrfs]
Jul 21 22:31:02 nas kernel:  [<ffffffffa02be770>] read_block_for_search.isra.29+0x160/0x3b0 [btrfs]
Jul 21 22:31:02 nas kernel:  [<ffffffffa02c429a>] btrfs_next_old_leaf+0x26a/0x460 [btrfs]
Jul 21 22:31:02 nas kernel:  [<ffffffffa02c44a0>] btrfs_next_leaf+0x10/0x20 [btrfs]
Jul 21 22:31:02 nas kernel:  [<ffffffffa030e924>] find_free_dev_extent+0xc4/0x360 [btrfs]
Jul 21 22:31:02 nas kernel:  [<ffffffffa030ed8a>] __btrfs_alloc_chunk+0x1ca/0x810 [btrfs]
Jul 21 22:31:02 nas kernel:  [<ffffffffa03119b4>] btrfs_alloc_chunk+0x34/0x40 [btrfs]
Jul 21 22:31:02 nas kernel:  [<ffffffffa02cc82e>] do_chunk_alloc+0x1de/0x3d0 [btrfs]
Jul 21 22:31:02 nas kernel:  [<ffffffffa02d1e65>] find_free_extent+0xbd5/0xe00 [btrfs]
Jul 21 22:31:02 nas kernel:  [<ffffffffa02d2268>] btrfs_reserve_extent+0x68/0x160 [btrfs]
Jul 21 22:31:02 nas kernel:  [<ffffffffa02ec36b>] cow_file_range+0x13b/0x430 [btrfs]
Jul 21 22:31:02 nas kernel:  [<ffffffffa02ed715>] submit_compressed_extents+0x215/0x4a0 [btrfs]
Jul 21 22:31:02 nas kernel:  [<ffffffffa02ed9a0>] ? submit_compressed_extents+0x4a0/0x4a0 [btrfs]
Jul 21 22:31:02 nas kernel:  [<ffffffffa02eda26>] async_cow_submit+0x86/0x90 [btrfs]
Jul 21 22:31:02 nas kernel:  [<ffffffffa0316735>] normal_work_helper+0x205/0x350 [btrfs]
Jul 21 22:31:02 nas kernel:  [<ffffffff8108afd8>] process_one_work+0x168/0x450
Jul 21 22:31:02 nas kernel:  [<ffffffff8108b79c>] worker_thread+0x1fc/0x550
Jul 21 22:31:02 nas kernel:  [<ffffffff8108b5a0>] ? init_pwq.part.22+0x10/0x10
Jul 21 22:31:02 nas kernel:  [<ffffffff81091d1a>] kthread+0xea/0x100
Jul 21 22:31:02 nas kernel:  [<ffffffff81091c30>] ? kthread_create_on_node+0x1b0/0x1b0
Jul 21 22:31:02 nas kernel:  [<ffffffff8152277c>] ret_from_fork+0x7c/0xb0
Jul 21 22:31:02 nas kernel:  [<ffffffff81091c30>] ? kthread_create_on_node+0x1b0/0x1b0
---
Jul 21 22:31:02 nas kernel: INFO: task afpd:15988 blocked for more than 120 seconds.
Jul 21 22:31:02 nas kernel:       Not tainted 3.16.0-1-mainline #1
Jul 21 22:31:02 nas kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 21 22:31:02 nas kernel: afpd            D 0000000000000001     0 15988  11686 0x00000004
Jul 21 22:31:02 nas kernel:  ffff88010d083c48 0000000000000086 ffff8800d8d88000 00000000000145c0
Jul 21 22:31:02 nas kernel:  ffff88010d083fd8 00000000000145c0 ffff8800d8d88000 ffff8800a4505000
Jul 21 22:31:02 nas kernel:  0000000000000002 00000000000a6fff ffff88010d083c28 ffffffffa02ed342
Jul 21 22:31:02 nas kernel: Call Trace:
Jul 21 22:31:02 nas kernel:  [<ffffffffa02ed342>] ? run_delalloc_range+0x192/0x350 [btrfs]
Jul 21 22:31:02 nas kernel:  [<ffffffff8114b750>] ? filemap_fdatawait+0x30/0x30
Jul 21 22:31:02 nas kernel:  [<ffffffff8151e919>] schedule+0x29/0x70
Jul 21 22:31:02 nas kernel:  [<ffffffff8151ebf4>] io_schedule+0x94/0xf0
Jul 21 22:31:02 nas kernel:  [<ffffffff8114b75e>] sleep_on_page+0xe/0x20
Jul 21 22:31:02 nas kernel:  [<ffffffff8151f138>] __wait_on_bit_lock+0x48/0xb0
Jul 21 22:31:02 nas kernel:  [<ffffffff8114b8b8>] __lock_page+0x78/0x90
Jul 21 22:31:02 nas kernel:  [<ffffffff810b6c70>] ? autoremove_wake_function+0x40/0x40
Jul 21 22:31:02 nas kernel:  [<ffffffffa0305a18>] extent_write_cache_pages.isra.28.constprop.43+0x2d8/0x410 [btrfs]
Jul 21 22:31:02 nas kernel:  [<ffffffffa0307cec>] extent_writepages+0x5c/0x90 [btrfs]
Jul 21 22:31:02 nas kernel:  [<ffffffffa02e9780>] ? btrfs_direct_IO+0x350/0x350 [btrfs]
Jul 21 22:31:02 nas kernel:  [<ffffffffa02e8518>] btrfs_writepages+0x28/0x30 [btrfs]
Jul 21 22:31:02 nas kernel:  [<ffffffff8115913e>] do_writepages+0x1e/0x30
Jul 21 22:31:02 nas kernel:  [<ffffffff8114d22d>] __filemap_fdatawrite_range+0x5d/0x80
Jul 21 22:31:02 nas kernel:  [<ffffffff8114d313>] filemap_fdatawrite_range+0x13/0x20
Jul 21 22:31:02 nas kernel:  [<ffffffffa02f68ad>] btrfs_sync_file+0x5d/0x320 [btrfs]
Jul 21 22:31:02 nas kernel:  [<ffffffff811f1e11>] do_fsync+0x51/0x80
Jul 21 22:31:02 nas kernel:  [<ffffffff811f20c0>] SyS_fsync+0x10/0x20
Jul 21 22:31:02 nas kernel:  [<ffffffff81522829>] system_call_fastpath+0x16/0x1b
---
Jul 21 22:33:02 nas kernel: INFO: task btrfs-transacti:921 blocked for more than 120 seconds.
Jul 21 22:33:02 nas kernel:       Not tainted 3.16.0-1-mainline #1
Jul 21 22:33:02 nas kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 21 22:33:02 nas kernel: btrfs-transacti D 0000000000000000     0   921      2 0x00000000
Jul 21 22:33:02 nas kernel:  ffff8800d27dbe00 0000000000000046 ffff8800d28d5bb0 00000000000145c0
Jul 21 22:33:02 nas kernel:  ffff8800d27dbfd8 00000000000145c0 ffff8800d28d5bb0 ffff880101f72480
Jul 21 22:33:02 nas kernel:  ffffffffa033b5a6 ffff8800d27dbda8 ffffffffa033b864 ffff8800d18f95a0
Jul 21 22:33:02 nas kernel: Call Trace:
Jul 21 22:33:02 nas kernel:  [<ffffffffa033b5a6>] ? btrfs_release_delayed_inode+0x46/0x60 [btrfs]
Jul 21 22:33:02 nas kernel:  [<ffffffffa033b864>] ? __btrfs_update_delayed_inode+0x1a4/0x210 [btrfs]
Jul 21 22:33:02 nas kernel:  [<ffffffff811a570a>] ? kmem_cache_alloc+0x16a/0x170
Jul 21 22:33:02 nas kernel:  [<ffffffffa02bb89a>] ? btrfs_alloc_path+0x1a/0x20 [btrfs]
Jul 21 22:33:02 nas kernel:  [<ffffffffa034344a>] ? btrfs_scrub_pause+0xea/0x110 [btrfs]
Jul 21 22:33:02 nas kernel:  [<ffffffff8151e919>] schedule+0x29/0x70
Jul 21 22:33:02 nas kernel:  [<ffffffffa02e5745>] btrfs_commit_transaction+0x405/0xa50 [btrfs]
Jul 21 22:33:02 nas kernel:  [<ffffffff810b6c30>] ? __wake_up_sync+0x20/0x20
Jul 21 22:33:02 nas kernel:  [<ffffffffa02e1275>] transaction_kthread+0x1e5/0x250 [btrfs]
Jul 21 22:33:02 nas kernel:  [<ffffffffa02e1090>] ? btrfs_cleanup_transaction+0x5a0/0x5a0 [btrfs]
Jul 21 22:33:02 nas kernel:  [<ffffffff81091d1a>] kthread+0xea/0x100
Jul 21 22:33:02 nas kernel:  [<ffffffff81090000>] ? parse_args+0x4b0/0x4e0
Jul 21 22:33:02 nas kernel:  [<ffffffff81091c30>] ? kthread_create_on_node+0x1b0/0x1b0
Jul 21 22:33:02 nas kernel:  [<ffffffff8152277c>] ret_from_fork+0x7c/0xb0
Jul 21 22:33:02 nas kernel:  [<ffffffff81091c30>] ? kthread_create_on_node+0x1b0/0x1b0
Martin Steigerwald July 23, 2014, 11:13 a.m. UTC | #18
Am Dienstag, 22. Juli 2014, 17:15:21 schrieb Chris Mason:
> On 07/22/2014 05:13 PM, Martin Steigerwald wrote:
> > Am Dienstag, 22. Juli 2014, 10:53:03 schrieb Chris Mason:
> >> On 07/19/2014 02:23 PM, Martin Steigerwald wrote:
> >>>> Running 3.15.6 with this patch applied on top:
> >>>>  - still causes a hang with `rsync -hPaHAXx --del /mnt/home/nyx/
> >>>>  /home/nyx/`
> >>>> 
> >>>> - no extra error messages printed (`dmesg | grep racing`) compared to
> >>>> without the patch
> >>> 
> >>> I got same results with 3.16-rc5 + this patch (see thread BTRFS hang
> >>> with
> >>> 3.16-rc5). 3.16-rc4 still is fine with me. No hang whatsoever so far.
> >>> 
> >>>> To recap some details (so I can have it all in one place):
> >>>>  - /home/ is btrfs with compress=lzo
> >>> 
> >>> BTRFS RAID 1 with lzo.
> >>> 
> >>>>  - I have _not_ created any nodatacow files.
> >>> 
> >>> Me neither.
> >>> 
> >>>>  - Full stack is: sata <-> dmcrypt <-> lvm <-> btrfs (I noticed others
> >>>> 
> >>>> mentioning the use of dmcrypt)
> >>> 
> >>> Same, except no dmcrypt.
> >> 
> >> Thanks for the help in tracking this down everyone.  We'll get there!
> >> Are you all running multi-disk systems (from a btrfs POV, more than one
> >> device?)  I don't care how many physical drives this maps to, just does
> >> btrfs think there's more than one drive.
> > 
> > As I said before, I am using BTRFS RAID 1: two logical volumes on two
> > distinct SSDs. The RAID is directly in BTRFS, no SoftRAID here (which I
> > wouldn't want to use with SSDs anyway).
> 
> When you say logical volumes, you mean LVM right?  Just making sure I
> know all the pieces involved.

Exactly.

As a recap from the other thread:

merkaba:~> btrfs fi sh /home
Label: 'home'  uuid: […]
        Total devices 2 FS bytes used 123.20GiB
        devid    1 size 160.00GiB used 159.98GiB path /dev/mapper/msata-home
        devid    2 size 160.00GiB used 159.98GiB path /dev/dm-3

Btrfs v3.14.1

merkaba:~#1> btrfs fi df /home
Data, RAID1: total=154.95GiB, used=120.61GiB
System, RAID1: total=32.00MiB, used=48.00KiB
Metadata, RAID1: total=5.00GiB, used=2.59GiB
unknown, single: total=512.00MiB, used=0.00
merkaba:~> df -hT /home
Filesystem             Type  Size  Used Avail Use% Mounted on
/dev/mapper/msata-home btrfs  320G    247G   69G   79% /home

merkaba:~> file -sk /dev/sata/home
/dev/sata/home: symbolic link to `../dm-3'
merkaba:~> file -sk /dev/dm-3     
/dev/dm-3: BTRFS Filesystem label "home", sectorsize 4096, nodesize 16384, 
leafsize 16384, UUID=[…], 
132303151104/343597383680 bytes used, 2 devices


And LVM layout:

merkaba:~> lsblk
NAME                    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                       8:0    0 279,5G  0 disk 
├─sda1                    8:1    0     4M  0 part 
├─sda2                    8:2    0   191M  0 part 
├─sda3                    8:3    0   286M  0 part 
└─sda4                    8:4    0   279G  0 part 
  ├─sata-home (dm-3)    254:3    0   160G  0 lvm  
  ├─sata-swap (dm-4)    254:4    0    12G  0 lvm  [SWAP]
  └─sata-debian (dm-5)  254:5    0    30G  0 lvm  
sdb                       8:16   0 447,1G  0 disk 
├─sdb1                    8:17   0   200M  0 part 
├─sdb2                    8:18   0   300M  0 part /boot
└─sdb3                    8:19   0 446,7G  0 part 
  ├─msata-home (dm-0)   254:0    0   160G  0 lvm  
  ├─msata-daten (dm-1)  254:1    0   200G  0 lvm  
  └─msata-debian (dm-2) 254:2    0    30G  0 lvm  
sr0                      11:0    1  1024M  0 rom 

sda is Intel SSD 320 SATA

sdb is Crucial m500 mSATA

Thanks,
Charles Cazabon July 23, 2014, 1:20 p.m. UTC | #19
Chris Mason <clm@fb.com> wrote:
> 
> Thanks for the help in tracking this down everyone.  We'll get there!
> Are you all running multi-disk systems (from a btrfs POV, more than one
> device?)  I don't care how many physical drives this maps to, just does
> btrfs think there's more than one drive.

Not me, at least - I'm doing the device aggregation down at the LVM level
(sata-dmcrypt-lvm-btrfs stack), so it's presented to btrfs as a single logical
device.

Charles
Cody P Schafer July 25, 2014, 2:27 a.m. UTC | #20
On Tue, Jul 22, 2014 at 9:53 AM, Chris Mason <clm@fb.com> wrote:
>
>
> On 07/19/2014 02:23 PM, Martin Steigerwald wrote:
>
>>> Running 3.15.6 with this patch applied on top:
>>>  - still causes a hang with `rsync -hPaHAXx --del /mnt/home/nyx/ /home/nyx/`
>>> - no extra error messages printed (`dmesg | grep racing`) compared to
>>> without the patch
>>
>> I got same results with 3.16-rc5 + this patch (see thread BTRFS hang with
>> 3.16-rc5). 3.16-rc4 still is fine with me. No hang whatsoever so far.
>>
>>> To recap some details (so I can have it all in one place):
>>>  - /home/ is btrfs with compress=lzo
>>
>> BTRFS RAID 1 with lzo.
>>
>>>  - I have _not_ created any nodatacow files.
>>
>> Me neither.
>>
>>>  - Full stack is: sata <-> dmcrypt <-> lvm <-> btrfs (I noticed others
>>> mentioning the use of dmcrypt)
>>
>> Same, except no dmcrypt.
>>
>
> Thanks for the help in tracking this down everyone.  We'll get there!
> Are you all running multi-disk systems (from a btrfs POV, more than one
> device?)  I don't care how many physical drives this maps to, just does
> btrfs think there's more than one drive.

No, both of my btrfs filesystems are single disk.
Tobias Holst Aug. 7, 2014, 3:12 p.m. UTC | #21
Hi

Is there anything new on this topic? I am using Ubuntu 14.04.1 and
experiencing the same problem.
- 6 HDDs
- LUKS on every HDD
- btrfs RAID6 over these 6 crypt devices
No LVM, no nodatacow files.
Mount-options: defaults,compress-force=lzo,space_cache
With the original 3.13-kernel (3.13.0-32-generic) it is working fine.
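
A rough sketch of that stack (device names and key file hypothetical):

    for d in sdb sdc sdd sde sdf sdg; do
        cryptsetup luksOpen /dev/$d crypt-$d --key-file /etc/luks.key
    done
    mkfs.btrfs -d raid6 -m raid6 /dev/mapper/crypt-sd[b-g]
    mount -o defaults,compress-force=lzo,space_cache \
        /dev/mapper/crypt-sdb /mnt/raid6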

Then I tried the following kernels from here:
http://kernel.ubuntu.com/~kernel-ppa/mainline/
linux-image-3.14.15-031415-generic_3.14.15-031415.201407311853_amd64.deb
- not even booting, kernel panic at boot.
linux-image-3.15.6-031506-generic_3.15.6-031506.201407172034_amd64.deb,
linux-image-3.15.7-031507-generic_3.15.7-031507.201407281235_amd64.deb,
and linux-image-3.16.0-031600-generic_3.16.0-031600.201408031935_amd64.deb
cause the hangs described in this thread. When doing big IO
(unpacking a multi-GB .rar archive) the filesystem stops working.
Load stays very high, but according to dstat nothing actually happens
on the drives. htop shows many kworker threads in state D
(uninterruptible sleep, usually IO).
Unmounting the btrfs filesystem only works with the -l (lazy) option.
Reboot or shutdown doesn't work because of the blocked threads, so
only a power cut works. After the reboot, the last data written before
the hang is lost.

I am now back on 3.13.

Regards


2014-07-25 4:27 GMT+02:00 Cody P Schafer <dev@codyps.com>:
>
> On Tue, Jul 22, 2014 at 9:53 AM, Chris Mason <clm@fb.com> wrote:
> >
> >
> > On 07/19/2014 02:23 PM, Martin Steigerwald wrote:
> >
> >>> Running 3.15.6 with this patch applied on top:
> >>>  - still causes a hang with `rsync -hPaHAXx --del /mnt/home/nyx/ /home/nyx/`
> >>> - no extra error messages printed (`dmesg | grep racing`) compared to
> >>> without the patch
> >>
> >> I got same results with 3.16-rc5 + this patch (see thread BTRFS hang with
> >> 3.16-rc5). 3.16-rc4 still is fine with me. No hang whatsoever so far.
> >>
> >>> To recap some details (so I can have it all in one place):
> >>>  - /home/ is btrfs with compress=lzo
> >>
> >> BTRFS RAID 1 with lzo.
> >>
> >>>  - I have _not_ created any nodatacow files.
> >>
> >> Me neither.
> >>
> >>>  - Full stack is: sata <-> dmcrypt <-> lvm <-> btrfs (I noticed others
> >>> mentioning the use of dmcrypt)
> >>
> >> Same, except no dmcrypt.
> >>
> >
> > Thanks for the help in tracking this down everyone.  We'll get there!
> > Are you all running multi-disk systems (from a btrfs POV, more than one
> > device?)  I don't care how many physical drives this maps to, just does
> > btrfs think there's more than one drive.
>
> No, both of my btrfs filesystems are single disk.
Duncan Aug. 7, 2014, 4:05 p.m. UTC | #22
Tobias Holst posted on Thu, 07 Aug 2014 17:12:17 +0200 as excerpted:

> Is there anything new on this topic? I am using Ubuntu 14.04.1 and
> experiencing the same problem.
> - 6 HDDs - LUKS on every HDD - btrfs RAID6 over this 6 crypt-devices No
> LVM, no nodatacow files.
> Mount-options: defaults,compress-force=lzo,space_cache With the original
> 3.13-kernel (3.13.0-32-generic) it is working fine.

I see you're using compress-force.  See the recent replies to the
"Btrfs: fix compressed write corruption on enospc" thread.

I'm not /sure/ your case is directly related (tho the kworker code is 
pretty new, and 3.13 may be working for you because it predates the 
migration to kworkers, which supports the case of it being either the 
same problem or a related one), but that's certainly one problem 
they've recently traced down... to a bug in the kworker threads code, 
which starts a new worker that can race with the first instead of 
obeying a flag that says to keep the work on the first worker.

Looks like they're doing a patch that takes a slower but safer path to 
work around the kworker bug for now, as that bug was only just traced 
(there was an earlier bug, with a patch available, that originally hid 
the ultimate problem, but obviously that was only half the fix, as it 
simply revealed the other bug underneath) and fixing it properly is 
likely to take some time.  Now that it's basically traced, the 
workaround patch should be published on-list shortly and should make it 
into 3.17 and back into the stables, altho I'm not sure it'll make 
3.16.1, etc.

But there's certainly progress. =:^)
Charles Cazabon Aug. 12, 2014, 2:55 a.m. UTC | #23
The blocked tasks issue that got significantly worse in 3.15 -- did anything
go into 3.16 related to this?  I didn't see a single "btrfs" in Linus' 3.16
announcement, so I don't know whether it should be better, the same, or worse
in this respect...

I haven't seen a definite statement about this on this list, either.

Can someone more familiar with the state of development comment on this?

Charles
Liu Bo Aug. 12, 2014, 2:56 a.m. UTC | #24
On Mon, Aug 11, 2014 at 08:55:21PM -0600, Charles Cazabon wrote:
> The blocked tasks issue that got significantly worse in 3.15 -- did anything
> go into 3.16 related to this?  I didn't see a single "btrfs" in Linus' 3.16
> announcement, so I don't know whether it should be better, the same, or worse
> in this respect...
> 
> I haven't seen a definite statement about this on this list, either.
> 
> Can someone more familiar with the state of development comment on this?

Good news is that we've figured out the bug and the patch is already under
testing :-) 

thanks,
-liubo
Duncan Aug. 12, 2014, 4:18 a.m. UTC | #25
Liu Bo posted on Tue, 12 Aug 2014 10:56:42 +0800 as excerpted:

> On Mon, Aug 11, 2014 at 08:55:21PM -0600, Charles Cazabon wrote:
>> The blocked tasks issue that got significantly worse in 3.15 -- did
>> anything go into 3.16 related to this?  I didn't see a single "btrfs"
>> in Linus' 3.16 announcement, so I don't know whether it should be
>> better, the same, or worse in this respect...
>> 
>> I haven't seen a definite statement about this on this list, either.
>> 
>> Can someone more familiar with the state of development comment on
>> this?
> 
> Good news is that we've figured out the bug and the patch is already
> under testing :-)

IOW, it's not in 3.16.0, but it will hopefully make it into 3.16.2 
(it'll likely be too late for 3.16.1).
Marc MERLIN Aug. 12, 2014, 4:49 a.m. UTC | #26
On Mon, Aug 11, 2014 at 08:55:21PM -0600, Charles Cazabon wrote:
> The blocked tasks issue that got significantly worse in 3.15 -- did anything
> go into 3.16 related to this?  I didn't see a single "btrfs" in Linus' 3.16
> announcement, so I don't know whether it should be better, the same, or worse
> in this respect...
> 
> I haven't seen a definite statement about this on this list, either.

Yes, 3.15 is unusable for some workloads, mine included.
Go back to 3.14 until there is a patch in 3.16; there isn't one quite
yet as of right now, but hopefully very soon.

Note 3.16.0 is actually worse than 3.15 for me.

Marc
James Cloos Aug. 18, 2014, 8:34 p.m. UTC | #27
>>>>> "MM" == Marc MERLIN <marc@merlins.org> writes:

MM> Note 3.16.0 is actually worse than 3.15 for me.

Here (a single partition btrfs), 3.16.0 works fine, but 3.17-rc1 fails again.

My /var/log is also a compressed, single-partition btrfs; that one doesn't
show the problem with any version.  Only the partition with the git, svn and
rsync trees does.

Last night's test of 3.17-rc1 showed the problem with the first git
pull, getting stuck reading FETCH_HEAD.  All repos on that fs failed
the same way.

But rebooting back to 3.16.0 let everything work perfectly.

-JimC

Patch

diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 3668048..8ab56df 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -8157,6 +8157,13 @@  void btrfs_destroy_inode(struct inode *inode)
 		spin_unlock(&root->fs_info->ordered_root_lock);
 	}
 
+	spin_lock(&root->fs_info->ordered_root_lock);
+	if (!list_empty(&BTRFS_I(inode)->ordered_operations)) {
+		list_del_init(&BTRFS_I(inode)->ordered_operations);
+printk(KERN_CRIT "racing inode deletion with ordered operations!!!!!!!!!!!\n");
+	}
+	spin_unlock(&root->fs_info->ordered_root_lock);
+
 	if (test_bit(BTRFS_INODE_HAS_ORPHAN_ITEM,
 		     &BTRFS_I(inode)->runtime_flags)) {
 		btrfs_info(root->fs_info, "inode %llu still on the orphan list",