Message ID: 20150818144900.GA26478@redhat.com (mailing list archive)
State: New, archived
On 08/18, Oleg Nesterov wrote:
>
> When I tried to run all tests, I
> got the new reports from lockdep.

Just in case... when I run all tests I see misc failures (with or without
the changes above) which I didn't try to interpret. In particular xfs/073
just hangs, "shutdown -r" doesn't work, the serial console continues to
print

[10784.605139] XFS (loop2): metadata I/O error: block 0x7d4948 ("xfs_buf_iodone_callbacks") error 5 numblks 8
[10784.605207] loop: Write error at byte offset 26843578368, length 4096.
[10784.605222] loop: Write error at byte offset 26843578368, length 4096.
[10784.605235] loop: Write error at byte offset 26843578368, length 4096.
[10784.605248] loop: Write error at byte offset 26843578368, length 4096.
[10784.605261] loop: Write error at byte offset 26843578368, length 4096.
[10784.605437] blk_update_request: I/O error, dev loop2, sector 52428864
[10784.605642] blk_update_request: I/O error, dev loop2, sector 52429104
[10784.605897] blk_update_request: I/O error, dev loop2, sector 104960064
[10784.606028] blk_update_request: I/O error, dev loop2, sector 104960304
[10784.606069] blk_update_request: I/O error, dev loop2, sector 104960416
[10784.606307] blk_update_request: I/O error, dev loop2, sector 157286464
[10784.606414] blk_update_request: I/O error, dev loop2, sector 157286704
[10784.635455] XFS: Failing async write: 55598 callbacks suppressed
[10784.635458] XFS (loop2): Failing async write on buffer block 0x3200070. Retrying async write.
[10784.635460] XFS (loop2): Failing async write on buffer block 0x3200068. Retrying async write.
[10784.635463] XFS (loop2): Failing async write on buffer block 0x9600060. Retrying async write.
[10784.635465] XFS (loop2): Failing async write on buffer block 0x3200060. Retrying async write.
[10784.635468] XFS (loop2): Failing async write on buffer block 0x9600050. Retrying async write.
[10784.635469] XFS (loop2): Failing async write on buffer block 0x9600040. Retrying async write.
[10784.635470] XFS (loop2): Failing async write on buffer block 0x6419050. Retrying async write.
[10784.635472] XFS (loop2): Failing async write on buffer block 0x6419040. Retrying async write.
[10784.635473] XFS (loop2): Failing async write on buffer block 0x3200050. Retrying async write.
[10784.635474] XFS (loop2): Failing async write on buffer block 0x3200040. Retrying async write.
[10789.577360] lo_write_bvec: 18978 callbacks suppressed
[10789.583033] loop: Write error at byte offset 4203909120, length 4096.
[10789.590232] blk_update_request: 10746 callbacks suppressed
[10789.596370] blk_update_request: I/O error, dev loop2, sector 8210760
[10789.603498] loop: Write error at byte offset 4208594944, length 4096.
[10789.610704] blk_update_request: I/O error, dev loop2, sector 8219912
[10789.617830] XFS (loop2): metadata I/O error: block 0x7d4948 ("xfs_buf_iodone_callbacks") error 5 numblks 8

again and again.

Plus some tests need a lot of time (for example generic/127 more than 70 minutes!),
so I never managed to run this test-suite completely.

Also, tests/generic/078 on 4.2.0-rc6 (without other changes) triggered

[ 2100.404545] BUG: looking up invalid subclass: 8
[ 2100.409600] turning off the locking correctness validator.
[ 2100.415722] CPU: 11 PID: 45819 Comm: renameat2 Not tainted 4.2.0-rc6 #29
[ 2100.423190] Hardware name: Intel Corporation S2600CP/S2600CP, BIOS RMLSDP.86I.R3.27.D685.1305151734 05/15/2013
[ 2100.434349] 0000000000000000 000000002d12fba7 ffff880befce7a78 ffffffff817ee6f2
[ 2100.442641] 0000000000000000 0000000000000296 ffff880befce7b68 ffffffff810f2e94
[ 2100.450946] ffffffff826f95e0 ffff880c08834380 ffff880befce7ab8 ffffffff8102611f
[ 2100.459241] Call Trace:
[ 2100.461971] [<ffffffff817ee6f2>] dump_stack+0x45/0x57
[ 2100.467705] [<ffffffff810f2e94>] __lock_acquire+0xb14/0x2040
[ 2100.474107] [<ffffffff8102611f>] ? save_stack_trace+0x2f/0x50
[ 2100.480614] [<ffffffff810f2d6d>] ? __lock_acquire+0x9ed/0x2040
[ 2100.487221] [<ffffffff8110fa5d>] ? debug_lockdep_rcu_enabled+0x1d/0x20
[ 2100.494600] [<ffffffff810f508e>] lock_acquire+0xbe/0x150
[ 2100.500648] [<ffffffffa04edeee>] ? xfs_ilock+0x10e/0x160 [xfs]
[ 2100.507266] [<ffffffffa04ee941>] ? xfs_lock_inodes+0x131/0x1e0 [xfs]
[ 2100.514454] [<ffffffff810edbcc>] down_write_nested+0x3c/0x70
[ 2100.520875] [<ffffffffa04edeee>] ? xfs_ilock+0x10e/0x160 [xfs]
[ 2100.527496] [<ffffffffa04ff803>] ? xfs_log_reserve+0x173/0x1c0 [xfs]
[ 2100.534692] [<ffffffffa04edeee>] xfs_ilock+0x10e/0x160 [xfs]
[ 2100.541104] [<ffffffffa04ee941>] xfs_lock_inodes+0x131/0x1e0 [xfs]
[ 2100.548108] [<ffffffffa04f1c37>] xfs_rename+0x2d7/0x970 [xfs]
[ 2100.554615] [<ffffffff810f1de3>] ? mark_held_locks+0x73/0xb0
[ 2100.561026] [<ffffffff810f202d>] ? trace_hardirqs_on+0xd/0x10
[ 2100.567546] [<ffffffffa04ebc33>] xfs_vn_rename+0xb3/0xf0 [xfs]
[ 2100.574150] [<ffffffff81252512>] vfs_rename+0x552/0x880
[ 2100.580077] [<ffffffff81258467>] SyS_renameat2+0x537/0x590
[ 2100.586293] [<ffffffff817f70ae>] entry_SYSCALL_64_fastpath+0x12/0x76
[ 2101.268858] XFS (loop0): Unmounting Filesystem
[ 2101.615330] XFS (loop0): Mounting V4 Filesystem
[ 2101.660285] XFS (loop0): Ending clean mount
[ 2101.694933] run fstests generic/079 at 2015-08-18 08:31:55

Perhaps this was fixed by Dave's ILOCK patches.

Oleg.
On Tue, Aug 18, 2015 at 04:49:00PM +0200, Oleg Nesterov wrote:
> Jan, Dave, perhaps you can take a look...
>
> On 08/14, Oleg Nesterov wrote:
> >
> > Plus another patch which removes the "trylock"
> > hack in __sb_start_write().
>
> I meant the patch we already discussed (attached at the end). And yes,
> previously I reported it passed the tests. However, I only ran the same
> 'grep -il freeze tests/*/???' tests. When I tried to run all tests, I
> got the new reports from lockdep.
>
> [ 2098.281171] May be due to missing lock nesting notation

<groan>

> [ 2098.288744] 4 locks held by fsstress/50971:
> [ 2098.293408] #0: (sb_writers#13){++++++}, at: [<ffffffff81248d32>] __sb_start_write+0x32/0x40
> [ 2098.303085] #1: (&type->i_mutex_dir_key#4/1){+.+.+.}, at: [<ffffffff8125685f>] filename_create+0x7f/0x170
> [ 2098.314038] #2: (sb_internal#2){++++++}, at: [<ffffffff81248d32>] __sb_start_write+0x32/0x40
> [ 2098.323711] #3: (&type->s_umount_key#54){++++++}, at: [<ffffffffa05a638c>] xfs_flush_inodes+0x1c/0x40 [xfs]
> [ 2098.334898]
> stack backtrace:
> [ 2098.339762] CPU: 3 PID: 50971 Comm: fsstress Not tainted 4.2.0-rc6+ #27
> [ 2098.347143] Hardware name: Intel Corporation S2600CP/S2600CP, BIOS RMLSDP.86I.R3.27.D685.1305151734 05/15/2013
> [ 2098.358303] 0000000000000000 00000000e70ee864 ffff880c05a2b9c8 ffffffff817ee692
> [ 2098.366603] 0000000000000000 ffffffff826f8030 ffff880c05a2bab8 ffffffff810f45be
> [ 2098.374900] 0000000000000000 ffff880c05a2bb20 0000000000000000 0000000000000000
> [ 2098.383197] Call Trace:
> [ 2098.385930] [<ffffffff817ee692>] dump_stack+0x45/0x57
> [ 2098.391667] [<ffffffff810f45be>] __lock_acquire+0x1e9e/0x2040
> [ 2098.398177] [<ffffffff810f310d>] ? __lock_acquire+0x9ed/0x2040
> [ 2098.404787] [<ffffffff811d4702>] ? pagevec_lookup_entries+0x22/0x30
> [ 2098.411879] [<ffffffff811d5142>] ? truncate_inode_pages_range+0x2b2/0x7e0
> [ 2098.419551] [<ffffffff810f542e>] lock_acquire+0xbe/0x150
> [ 2098.425566] [<ffffffff81248d32>] ? __sb_start_write+0x32/0x40
> [ 2098.432079] [<ffffffff810ede91>] percpu_down_read+0x51/0xa0
> [ 2098.438396] [<ffffffff81248d32>] ? __sb_start_write+0x32/0x40
> [ 2098.444905] [<ffffffff81248d32>] __sb_start_write+0x32/0x40
> [ 2098.451244] [<ffffffffa05a7f84>] xfs_trans_alloc+0x24/0x40 [xfs]
> [ 2098.458076] [<ffffffffa059e872>] xfs_inactive_truncate+0x32/0x110 [xfs]
> [ 2098.465580] [<ffffffffa059f662>] xfs_inactive+0x102/0x120 [xfs]
> [ 2098.472308] [<ffffffffa05a475b>] xfs_fs_evict_inode+0x7b/0xc0 [xfs]
> [ 2098.479401] [<ffffffff812642ab>] evict+0xab/0x170
> [ 2098.484748] [<ffffffff81264568>] iput+0x1a8/0x230
> [ 2098.490100] [<ffffffff8127701c>] sync_inodes_sb+0x14c/0x1d0
> [ 2098.496439] [<ffffffffa05a6398>] xfs_flush_inodes+0x28/0x40 [xfs]
> [ 2098.503361] [<ffffffffa059e213>] xfs_create+0x5c3/0x6d0 [xfs]

Another false positive. We have a transaction handle here, but we've
failed to reserve space for it, so it's not an active transaction yet;
i.e. the filesystem is operating at ENOSPC. On an ENOSPC error, we
flush out dirty data to free up reserved delalloc space, which is
obviously then racing with an unlink of an inode in another thread, and
so drops the final reference to the inode and goes down the eviction
path that runs transactions.

But there is no deadlock here - while we have a transaction handle
allocated in the xfs_create() context, it is not yet active and holds
no reservation, so it is safe for eviction to run new transactions here
as there is no active transaction in progress.

Cheers,

Dave.
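To make the nesting Dave describes easier to follow, the backtrace above collapses to roughly the call chain below. This is a paraphrase assembled from the posted trace and his explanation, not verbatim kernel code; the point is that both xfs_trans_alloc() calls take SB_FREEZE_FS freeze protection (sb_start_intwrite()) on the same superblock, which is what lockdep reports as recursion:

    xfs_create()
      xfs_trans_alloc()                /* sb_start_intwrite(): sb_internal taken    */
      xfs_trans_reserve()              /* fails with ENOSPC, no reservation held    */
      xfs_flush_inodes()               /* flush dirty data to free delalloc space   */
        sync_inodes_sb()
          iput()                       /* another thread already unlinked the inode */
            evict()
              xfs_fs_evict_inode()
                xfs_inactive()
                  xfs_inactive_truncate()
                    xfs_trans_alloc()  /* sb_internal taken again -> lockdep report */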
On Tue, Aug 18, 2015 at 05:18:16PM +0200, Oleg Nesterov wrote:
> On 08/18, Oleg Nesterov wrote:
> >
> > When I tried to run all tests, I
> > got the new reports from lockdep.
>
> Just in case... when I run all tests I see misc failures (with or without
> the changes above) which I didn't try to interpret. In particular xfs/073
> just hangs, "shutdown -r" doesn't work, the serial console continues to
> print
>
> [10784.605139] XFS (loop2): metadata I/O error: block 0x7d4948 ("xfs_buf_iodone_callbacks") error 5 numblks 8

EIO from the loop device.

> [10784.605207] loop: Write error at byte offset 26843578368, length 4096.
> [10784.605222] loop: Write error at byte offset 26843578368, length 4096.
> [10784.605235] loop: Write error at byte offset 26843578368, length 4096.
> [10784.605248] loop: Write error at byte offset 26843578368, length 4096.
> [10784.605261] loop: Write error at byte offset 26843578368, length 4096.

That makes me think the underlying filesystem (i.e. the fs the loop
image file is hosted on) ran out of space. Everything will go to shit
if that happens.

> again and again.
>
> Plus some tests need a lot of time (for example generic/127 more than 70 minutes!),

Yup, that one only does about 50,000 synchronous writes via fsx. If you
want to run fast, use a virtual machine with 10GB RAM and a pair of 4GB
ramdisks as the storage. Or expunge the long-running tests (look up the
-X option) so they are skipped.

> Also, tests/generic/078 on 4.2.0-rc6 (without other changes) triggered
>
> [ 2100.404545] BUG: looking up invalid subclass: 8
> [ 2100.409600] turning off the locking correctness validator.

That's another lockdep annotation limitation I've had to work around.
The RENAME_WHITEOUT changes caused that by introducing the need to lock
5 inodes at the same time (src/dst dir, src/dst file, whiteout inode)
instead of only 4.

> Perhaps this was fixed by Dave's ILOCK patches.

It greatly complicated those patches, you mean?

But, yes, those patches should have fixed it - I had to shoehorn about
15 different XFS inode locking subclasses into the 8 subclasses that
lockdep supports. And I used all 8 subclasses for the ILOCK
annotations, so if there's some other annotation we need to add (e.g.
more than one level of ILOCK_PARENT lock nesting) then, well, we will
need at least 16 subclasses to annotate everything.

I hate lockdep.

Cheers,

Dave.
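For readers who have not hit this limit before: lockdep distinguishes nested acquisitions of the same lock class by a small per-class subclass number, capped at MAX_LOCKDEP_SUBCLASSES (8) in include/linux/lockdep.h. The sketch below is a deliberately simplified illustration of the annotation pattern that overflows it - it is not the real xfs_lock_inodes(), just the general shape of "one subclass per position in an ordered lock set":

    #include <linux/rwsem.h>

    /*
     * Illustration only: lock an ordered set of locks of the same class,
     * giving each acquisition its own lockdep subclass so the nesting is
     * not reported as recursion.  lockdep only supports 8 subclasses per
     * class, so once a single operation needs a 9th nesting level it
     * prints "BUG: looking up invalid subclass: 8" and disables itself.
     */
    static void lock_ordered_set(struct rw_semaphore **locks, int count)
    {
            int i;

            for (i = 0; i < count; i++)
                    down_write_nested(locks[i], i);  /* subclass must stay < 8 */
    }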
On 08/19, Dave Chinner wrote:
>
> On Tue, Aug 18, 2015 at 04:49:00PM +0200, Oleg Nesterov wrote:
> > Jan, Dave, perhaps you can take a look...
> >
> > On 08/14, Oleg Nesterov wrote:
> > >
> > > Plus another patch which removes the "trylock"
> > > hack in __sb_start_write().
> >
> > I meant the patch we already discussed (attached at the end). And yes,
> > previously I reported it passed the tests. However, I only ran the same
> > 'grep -il freeze tests/*/???' tests. When I tried to run all tests, I
> > got the new reports from lockdep.
> >
> > [ 2098.281171] May be due to missing lock nesting notation
>
> <groan>
>
> > [ 2098.288744] 4 locks held by fsstress/50971:
> > [... full lockdep report quoted earlier in the thread ...]
> > [ 2098.503361] [<ffffffffa059e213>] xfs_create+0x5c3/0x6d0 [xfs]
>
> Another false positive.

Confused...

Dave, I removed your explanation, which I can't understand anyway; let
me remind you that I know absolutely nothing about filesystems.

> But there is no deadlock here

Yes, we hold SB_FREEZE_WRITE lock, so recursive SB_FREEZE_FS is safe.

But, this means that the comment in __sb_start_write() is still correct,
"XFS for example gets freeze protection on internal level twice" is true,
and we can not remove this force_trylock hack.

(Plus we have the jbd2_handle/sb_internal lock inversion, but this is the
same thing: safe because of the lock held at a higher level.)

Or did I misunderstand?

Oleg.
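The ordering argument Oleg is making can be modelled outside the kernel. freeze_super() takes the sb_writers levels strictly in order (SB_FREEZE_WRITE, then SB_FREEZE_PAGEFAULT, then SB_FREEZE_FS), waiting for each level to drain before moving on, so a task that already holds SB_FREEZE_WRITE guarantees no freezer can be queued as a writer on the SB_FREEZE_FS level. The stand-alone sketch below (pthread rwlocks standing in for the per-level percpu_rw_semaphores; an illustration of the argument, not kernel code) runs to completion rather than deadlocking:

    /*
     * Three rwlocks model the SB_FREEZE_WRITE, SB_FREEZE_PAGEFAULT and
     * SB_FREEZE_FS levels.  The "freezer" write-locks them strictly in
     * order; the "task" read-holds WRITE and then read-locks FS twice
     * (the recursion lockdep complains about).  Because the freezer is
     * stuck waiting for WRITE, no writer is ever queued on FS, so the
     * recursive read acquisition cannot block.
     */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    enum { FREEZE_WRITE, FREEZE_PAGEFAULT, FREEZE_FS, FREEZE_LEVELS };

    static pthread_rwlock_t level[FREEZE_LEVELS] = {
            PTHREAD_RWLOCK_INITIALIZER,
            PTHREAD_RWLOCK_INITIALIZER,
            PTHREAD_RWLOCK_INITIALIZER,
    };

    static void *freezer(void *arg)
    {
            (void)arg;
            /* freeze_super() analogue: take every level, in order, as a writer */
            for (int i = 0; i < FREEZE_LEVELS; i++) {
                    pthread_rwlock_wrlock(&level[i]);
                    printf("freezer: level %d frozen\n", i);
            }
            for (int i = FREEZE_LEVELS - 1; i >= 0; i--)
                    pthread_rwlock_unlock(&level[i]);
            return NULL;
    }

    int main(void)
    {
            pthread_t t;

            /* task: sb_start_write() analogue, held across the whole operation */
            pthread_rwlock_rdlock(&level[FREEZE_WRITE]);

            pthread_create(&t, NULL, freezer, NULL);
            sleep(1);       /* let the freezer block on FREEZE_WRITE */

            /* two xfs_trans_alloc() analogues: recursive read of the FS level
             * is safe because the freezer cannot have reached it yet */
            pthread_rwlock_rdlock(&level[FREEZE_FS]);
            pthread_rwlock_rdlock(&level[FREEZE_FS]);
            printf("task: nested SB_FREEZE_FS protection acquired, no deadlock\n");

            pthread_rwlock_unlock(&level[FREEZE_FS]);
            pthread_rwlock_unlock(&level[FREEZE_FS]);
            pthread_rwlock_unlock(&level[FREEZE_WRITE]);

            pthread_join(t, NULL);
            return 0;
    }

Build with "gcc -pthread". If the task did not already hold the WRITE level, the same recursive read of the FS level could hang behind a freezer that had queued as a writer on it - which is exactly the case the lockdep annotation cannot express, hence the force_trylock workaround in __sb_start_write().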
On Wed, Aug 19, 2015 at 05:00:26PM +0200, Oleg Nesterov wrote:
> On 08/19, Dave Chinner wrote:
> >
> > On Tue, Aug 18, 2015 at 04:49:00PM +0200, Oleg Nesterov wrote:
> > > Jan, Dave, perhaps you can take a look...
> > >
> > > On 08/14, Oleg Nesterov wrote:
> > > >
> > > > Plus another patch which removes the "trylock"
> > > > hack in __sb_start_write().
> > >
> > > I meant the patch we already discussed (attached at the end). And yes,
> > > previously I reported it passed the tests. However, I only ran the same
> > > 'grep -il freeze tests/*/???' tests. When I tried to run all tests, I
> > > got the new reports from lockdep.
> > >
> > > [ 2098.281171] May be due to missing lock nesting notation
> >
> > <groan>
> >
> > > [ 2098.288744] 4 locks held by fsstress/50971:
> > > [... full lockdep report quoted earlier in the thread ...]
> > > [ 2098.503361] [<ffffffffa059e213>] xfs_create+0x5c3/0x6d0 [xfs]
> >
> > Another false positive.
>
> Confused...
>
> Dave, I removed your explanation, which I can't understand anyway; let
> me remind you that I know absolutely nothing about filesystems.
>
> > But there is no deadlock here
>
> Yes, we hold SB_FREEZE_WRITE lock, so recursive SB_FREEZE_FS is safe.
>
> But, this means that the comment in __sb_start_write() is still correct,
> "XFS for example gets freeze protection on internal level twice" is true,
> and we can not remove this force_trylock hack.

You've hit a very rare corner case of a rare corner case. Please
forgive me if I forget such paths - stuff that just works because it
was required to be deadlock free is easy to forget because, well, it
doesn't deadlock.

Cheers,

Dave.
diff --git a/fs/super.c b/fs/super.c
index 1170dec..0b06e36 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -1169,36 +1169,12 @@ EXPORT_SYMBOL(__sb_end_write);
  */
 int __sb_start_write(struct super_block *sb, int level, bool wait)
 {
-	bool force_trylock = false;
-	int ret = 1;
-
-#ifdef CONFIG_LOCKDEP
-	/*
-	 * We want lockdep to tell us about possible deadlocks with freezing
-	 * but it's it bit tricky to properly instrument it. Getting a freeze
-	 * protection works as getting a read lock but there are subtle
-	 * problems. XFS for example gets freeze protection on internal level
-	 * twice in some cases, which is OK only because we already hold a
-	 * freeze protection also on higher level. Due to these cases we have
-	 * to use wait == F (trylock mode) which must not fail.
-	 */
 	if (wait) {
-		int i;
-
-		for (i = 0; i < level - 1; i++)
-			if (percpu_rwsem_is_held(sb->s_writers.rw_sem + i)) {
-				force_trylock = true;
-				break;
-			}
-	}
-#endif
-	if (wait && !force_trylock)
 		percpu_down_read(sb->s_writers.rw_sem + level-1);
-	else
-		ret = percpu_down_read_trylock(sb->s_writers.rw_sem + level-1);
+		return 1;
+	}
 
-	WARN_ON(force_trylock & !ret);
-	return ret;
+	return percpu_down_read_trylock(sb->s_writers.rw_sem + level-1);
 }
 EXPORT_SYMBOL(__sb_start_write);
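For context on how the simplified __sb_start_write() above gets called: its users are thin wrappers in include/linux/fs.h that pass a fixed freeze level and the wait flag, roughly as below for kernels of this era (paraphrased from memory, so treat the exact bodies as approximate):

    static inline void sb_start_write(struct super_block *sb)
    {
            __sb_start_write(sb, SB_FREEZE_WRITE, true);
    }

    static inline int sb_start_write_trylock(struct super_block *sb)
    {
            return __sb_start_write(sb, SB_FREEZE_WRITE, false);
    }

    static inline void sb_start_pagefault(struct super_block *sb)
    {
            __sb_start_write(sb, SB_FREEZE_PAGEFAULT, true);
    }

    /* taken by xfs_trans_alloc(): the SB_FREEZE_FS acquisition whose
     * recursive use is discussed in the thread above */
    static inline void sb_start_intwrite(struct super_block *sb)
    {
            __sb_start_write(sb, SB_FREEZE_FS, true);
    }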