
[BUG] xfs/109 crashed on 2k block size reflink-enabled XFS

Message ID 20161205143906.GA16352@infradead.org (mailing list archive)
State Deferred, archived

Commit Message

Christoph Hellwig Dec. 5, 2016, 2:39 p.m. UTC
On Mon, Dec 05, 2016 at 05:21:12PM +0800, Eryu Guan wrote:
> Hi,
> 
> I hit an xfs/109 crash today while testing reflink XFS with 2k block
> size on x86_64 hosts (both bare-metal and KVM guest).
> 
> It can be reproduced by running xfs/109 many times; I ran a 50-iteration
> loop twice and it crashed at the 21st and 46th runs. I can reproduce it
> with both the linus tree (4.9-rc4) and the linux-xfs for-next branch
> (updated 2016-11-30). I haven't been able to reproduce it with 4k
> block size XFS.

Haven't been able to reproduce it yet, unfortunately.  But judging from
the out-of-range block, this looks like it could be NULLFSBLOCK
converted to a daddr.
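
For illustration, a minimal userspace sketch of why that is plausible
(the encoding is simplified and the agblklog/sector values are made up
for this example; this is not the kernel's FSB-to-daddr macro verbatim):

#include <stdint.h>
#include <stdio.h>

#define NULLFSBLOCK	((uint64_t)-1)

int main(void)
{
	uint64_t agblklog = 15;		/* log2(AG size in blocks), assumed */
	uint64_t blkbb_log = 2;		/* 2k blocks = 4 512-byte sectors */
	uint64_t fsbno = NULLFSBLOCK;	/* all ones */

	/* split the fsbno into agno/agbno, then rescale to 512-byte sectors */
	uint64_t agno  = fsbno >> agblklog;
	uint64_t agbno = fsbno & ((1ULL << agblklog) - 1);
	uint64_t daddr = ((agno << agblklog) + agbno) << blkbb_log;

	/* prints 0xfffffffffffffffc: far beyond the end of any device */
	printf("daddr = 0x%llx\n", (unsigned long long)daddr);
	return 0;
}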

I assume you are running without CONFIG_XFS_DEBUG or CONFIG_XFS_WARN
enabled?

The patch below would catch this issue in a non-debug build.  Still
trying to reproduce in the meantime...



Comments

Eryu Guan Dec. 6, 2016, 6:37 a.m. UTC | #1
On Mon, Dec 05, 2016 at 10:28:02AM -0800, Darrick J. Wong wrote:
> Does it happen if rmapbt=0?  Since xfs/109 isn't doing any CoW, it's

FYI, I looped xfs/109 150 times with rmapbt=0 and didn't hit any
problems.

[root@ibm-x3550m3-05 xfstests]# xfs_info /mnt/xfs
meta-data=/dev/mapper/testvg-testlv2 isize=512    agcount=4, agsize=20480 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1 spinodes=0 rmapbt=0
         =                       reflink=0
data     =                       bsize=2048   blocks=81920, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=2048   blocks=1445, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Thanks,
Eryu
Christoph Hellwig Dec. 6, 2016, 2:45 p.m. UTC | #2
I think the problem is that the extents -> btree conversion does not
use the per-AG reservations, but it probably should (even though it
predates them, of course).

In the reproducer the fs still has enough free blocks to allocate the
one block for the first bmap btree leaf.  But all the free space sits
in AGs with a lower agno than the one we used for allocating the actual
extent, and thus xfs_alloc_vextent never manages to allocate it.
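
As a self-contained toy sketch of that failure mode (not the real
xfs_alloc_vextent; the struct and helper are invented here, only the
"scan upward from the firstblock AG" rule is the point):

#include <stdint.h>
#include <stdio.h>

#define NULLFSBLOCK	((uint64_t)-1)
#define NULLAGBLOCK	((uint32_t)-1)

struct fs_sketch {
	uint32_t	agcount;
	uint32_t	agblklog;
	uint32_t	*free_in_ag;	/* free blocks per AG */
};

/* hypothetical helper: hand out one block from an AG, if it has any */
static uint32_t alloc_from_ag(struct fs_sketch *fs, uint32_t agno)
{
	if (fs->free_in_ag[agno] == 0)
		return NULLAGBLOCK;
	return --fs->free_in_ag[agno];
}

/*
 * Once a transaction has allocated from AG N (tracked via *firstblock),
 * later allocations in that transaction may only scan AGs N and upward
 * to preserve the AGF locking order, so free space in lower AGs is
 * unreachable even though the fs as a whole is not at ENOSPC.
 */
static uint64_t alloc_vextent_sketch(struct fs_sketch *fs, uint32_t start_agno)
{
	for (uint32_t agno = start_agno; agno < fs->agcount; agno++) {
		uint32_t agbno = alloc_from_ag(fs, agno);

		if (agbno != NULLAGBLOCK)
			return ((uint64_t)agno << fs->agblklog) | agbno;
	}
	return NULLFSBLOCK;
}

int main(void)
{
	uint32_t free_in_ag[4] = { 100, 50, 0, 0 };	/* space only in AGs 0-1 */
	struct fs_sketch fs = { 4, 15, free_in_ag };

	/* the extent allocation pinned us to AG 2; the bmbt block can't come */
	uint64_t fsbno = alloc_vextent_sketch(&fs, 2);

	printf("%s\n", fsbno == NULLFSBLOCK ? "allocation failed" : "allocated");
	return 0;
}
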
Brian Foster Dec. 6, 2016, 3:19 p.m. UTC | #3
On Tue, Dec 06, 2016 at 06:45:59AM -0800, Christoph Hellwig wrote:
> I think the problem is that the extents -> btree conversion does not
> use the per-AG reservations, but it probably should (even though it
> predates them, of course).
> 
> In the reproducer the fs still has enough free blocks to allocate the
> one block for the first bmap btree leaf.  But all the free space sits
> in AGs with a lower agno than the one we used for allocating the actual
> extent, and thus xfs_alloc_vextent never manages to allocate it.

Not that I have any insight into the problem here... :P but I'm still[1]
kind of wondering how that mechanism is supposed to work when it
ultimately calls xfs_mod_fdblocks() for each AG...?

Brian

[1] http://www.spinics.net/lists/linux-xfs/msg01509.html

> --
> To unsubscribe from this list: send the line "unsubscribe linux-xfs" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
--
To unsubscribe from this list: send the line "unsubscribe linux-xfs" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Darrick J. Wong Dec. 6, 2016, 6:14 p.m. UTC | #4
On Tue, Dec 06, 2016 at 10:19:46AM -0500, Brian Foster wrote:
> On Tue, Dec 06, 2016 at 06:45:59AM -0800, Christoph Hellwig wrote:
> > I think the problem is that the extents -> btree conversion does not
> > use the per-AG reservations, but it probably should (even though it
> > predates them, of course).
> > 
> > In the reproducer the fs still has enough free blocks to allocate the
> > one block for the first bmap btree leaf.  But all the free space sits
> > in AGs with a lower agno than the one we used for allocating the actual
> > extent, and thus xfs_alloc_vextent never manages to allocate it.
> 
> Not that I have any insight into the problem here... :P but I'm still[1]
> kind of wondering how that mechanism is supposed to work when it
> ultimately calls xfs_mod_fdblocks() for each AG...?

Oh, heh, I was meaning to reply to that and never did. :(

Will go work on that!

--D

> 
> Brian
> 
> [1] http://www.spinics.net/lists/linux-xfs/msg01509.html
Darrick J. Wong Dec. 7, 2016, 3:49 a.m. UTC | #5
On Tue, Dec 06, 2016 at 06:45:59AM -0800, Christoph Hellwig wrote:
> I think the problem is that the extents -> btree conversion does not
> use the per-AG reservations, but it probably should (even though it
> predates them, of course).
> 
> In the reproducer the fs still has enough free blocks to allocate the
> one block for the first bmap btree leaf.  But all the free space sits
> in AGs with a lower agno than the one we used for allocating the actual
> extent, and thus xfs_alloc_vextent never manages to allocate it.

Wellll... I cobbled together a crappy patch that flips on
XFS_AG_RESV_AGFL if xfs_bmap_extents_to_btree really can't get a block.
It seems to have survived ~175 iterations of xfs/109 so I'll try to
clean it up tomorrow.
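
Roughly this shape, going by the description above (a fragment, not the
actual patch; args.resv and XFS_AG_RESV_AGFL are the 4.9-era per-AG
reservation interface, but the exact placement inside
xfs_bmap_extents_to_btree is an assumption):

	error = xfs_alloc_vextent(&args);
	if (!error && args.fsbno == NULLFSBLOCK) {
		/* last resort: retry against the per-AG AGFL reservation */
		args.resv = XFS_AG_RESV_AGFL;
		error = xfs_alloc_vextent(&args);
	}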

Not sure that helps Christoph's situation though... if you're not
running rmap or reflink then the AG reservation is always zero.

--D

Christoph Hellwig Dec. 7, 2016, 7:18 a.m. UTC | #6
On Tue, Dec 06, 2016 at 07:49:03PM -0800, Darrick J. Wong wrote:
> On Tue, Dec 06, 2016 at 06:45:59AM -0800, Christoph Hellwig wrote:
> > I think the problem is that the extents -> btree conversion does not
> > use the per-AG reservations, but it probably should (even though it
> > predates them, of course).
> > 
> > In the reproducer the fs still has enough free blocks to allocate the
> > one block for the first bmap btree leaf.  But all the free space sits
> > in AGs with a lower agno than the one we used for allocating the actual
> > extent, and thus xfs_alloc_vextent never manages to allocate it.
> 
> Wellll... I cobbled together a crappy patch that flips on
> XFS_AG_RESV_AGFL if xfs_bmap_extents_to_btree really can't get a block.
> It seems to have survived ~175 iterations of xfs/109 so I'll try to
> clean it up tomorrow.

I tried it with XFS_AG_RESV_METADATA, but that didn't work.  But then
again I didn't add an additional reservation, and I was about to head
out for dinner, so I didn't investigate the details.  It might have been
the case Ross pointed out yesterday, so I'll look into the details more
today.

> Not sure that helps Christoph's situation though... if you're not
> running rmap or reflink then the AG reservation is always zero.

For now I was just running xfs/109.  The customer workload uses
reflinks, but not rmap.  That being said, I think this issue can
in theory happen without either one due to the way the AG loop
works in xfs_alloc_vextent - while we have a reservation for the
indirect block(s), there is no guarantee it is in an AG with an agno
greater than or equal to the one used for the actual extent allocation.
Christoph Hellwig Dec. 7, 2016, 5:40 p.m. UTC | #7
On Tue, Dec 06, 2016 at 11:18:57PM -0800, Christoph Hellwig wrote:
> > Wellll... I cobbled together a crappy patch that flips on
> > XFS_AG_RESV_AGFL if xfs_bmap_extents_to_btree really can't get a block.
> > It seems to have survived ~175 iterations of xfs/109 so I'll try to
> > clean it up tomorrow.
> 
> I tried it with XFS_AG_RESV_METADATA, but that didn't work.  But then
> again I didn't add an additional reservation, and I was about to head
> out for dinner, so I didn't investigate the details.  It might have been
> the case Ross pointed out yesterday, so I'll look into the details more
> today.

XFS_AG_RESV_AGFL works.  For some kinds of "work".  I can't see the
original issue anymore, but I now see the related assert below a lot
(which I've also seen before, but not as often), so there is some more I
need to look into.

[ 2594.324341] XFS: Assertion failed: fs_is_ok, file: fs/xfs/libxfs/xfs_btree.c, line: 3484
[ 2594.329918] ------------[ cut here ]------------
[ 2594.330309] kernel BUG at fs/xfs/xfs_message.c:113!
[ 2594.330641] invalid opcode: 0000 [#1] SMP
[ 2594.330912] Modules linked in:
[ 2594.331129] CPU: 2 PID: 29744 Comm: kworker/u8:0 Tainted: G        W 4.9.0-rc1+ #1758
[ 2594.331680] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140531_083030-gandalf 04/01/2014
[ 2594.332353] Workqueue: writeback wb_workfn (flush-252:32)
[ 2594.332731] task: ffff88000d86ccc0 task.stack: ffffc90009f74000
[ 2594.333127] RIP: 0010:[<ffffffff815aee1d>]  [<ffffffff815aee1d>] assfail+0x1d/0x20
[ 2594.333214] RSP: 0018:ffffc90009f774c8  EFLAGS: 00010282
[ 2594.333214] RAX: 00000000ffffffea RBX: ffff880132b2ac08 RCX: 0000000000000021
[ 2594.333214] RDX: ffffc90009f773f0 RSI: 000000000000000a RDI: ffffffff8240a75b
[ 2594.333214] RBP: ffffc90009f774c8 R08: 0000000000000000 R09: 0000000000000000
[ 2594.333214] R10: 000000000000000a R11: f000000000000000 R12: ffff880132b2ac08
[ 2594.333214] R13: 0000000000000000 R14: ffffc90009f774ec R15: ffffc90009f775dc
[ 2594.333214] FS:  0000000000000000(0000) GS:ffff88013fd00000(0000) knlGS:0000000000000000
[ 2594.333214] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 2594.333214] CR2: 00007f7f7c43c6c0 CR3: 0000000002606000 CR4: 00000000000006e0
[ 2594.333214] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 2594.333214] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 2594.333214] Stack:
[ 2594.333214]  ffffc90009f77568 ffffffff8154fddc ffffc90009f774ec 000000000000007b
[ 2594.333214]  0000000009f77568 ffffffffffffffff 0000000000000000 00c01e0000000000
[ 2594.333214]  1f0000b830000000 ffffffffffffffff 600f000000000000 ffffffff8157102d
[ 2594.333214] Call Trace:
[ 2594.333214]  [<ffffffff8154fddc>] xfs_btree_insert+0xac/0x1f0
[ 2594.333214]  [<ffffffff8157102d>] ? xfs_iext_insert+0xad/0x1e0
[ 2594.333214]  [<ffffffff81536802>] ? xfs_bmap_add_extent_delay_real+0xe22/0x3670
[ 2594.333214]  [<ffffffff8153887f>] xfs_bmap_add_extent_delay_real+0x2e9f/0x3670
[ 2594.333214]  [<ffffffff8154072a>] xfs_bmapi_write+0xb5a/0x1200
[ 2594.333214]  [<ffffffff815a45ad>] xfs_iomap_write_allocate+0x18d/0x370
[ 2594.333214]  [<ffffffff81587274>] xfs_map_blocks+0x214/0x460
[ 2594.333214]  [<ffffffff8158847c>] xfs_do_writepage+0x2bc/0x800
[ 2594.333214]  [<ffffffff811cbfea>] write_cache_pages+0x1fa/0x5a0
[ 2594.333214]  [<ffffffff815881c0>] ? xfs_aops_discard_page+0x140/0x140
[ 2594.333214]  [<ffffffff8158779e>] xfs_vm_writepages+0x9e/0xd0
[ 2594.333214]  [<ffffffff811ce77c>] do_writepages+0x1c/0x30
[ 2594.333214]  [<ffffffff8124a84c>] __writeback_single_inode+0x5c/0x6f0
[ 2594.333214]  [<ffffffff8124bb91>] writeback_sb_inodes+0x2a1/0x5e0
[ 2594.333214]  [<ffffffff8124c142>] wb_writeback+0x112/0x4f0
[ 2594.333214]  [<ffffffff8124cc05>] wb_workfn+0x115/0x5f0
[ 2594.333214]  [<ffffffff810f70fb>] ? process_one_work+0x13b/0x600
[ 2594.333214]  [<ffffffff810f7181>] process_one_work+0x1c1/0x600
[ 2594.333214]  [<ffffffff810f70fb>] ? process_one_work+0x13b/0x600
[ 2594.333214]  [<ffffffff810f7624>] worker_thread+0x64/0x4a0
[ 2594.333214]  [<ffffffff810f75c0>] ? process_one_work+0x600/0x600
[ 2594.333214]  [<ffffffff810f75c0>] ? process_one_work+0x600/0x600
[ 2594.333214]  [<ffffffff810fd2e2>] kthread+0xf2/0x110
[ 2594.333214]  [<ffffffff810d5a1e>] ? put_task_stack+0x15e/0x190
[ 2594.333214]  [<ffffffff810fd1f0>] ? kthread_park+0x60/0x60
[ 2594.333214]  [<ffffffff81e7156a>] ret_from_fork+0x2a/0x40

Darrick J. Wong Dec. 8, 2016, 6:35 a.m. UTC | #8
On Wed, Dec 07, 2016 at 09:40:32AM -0800, Christoph Hellwig wrote:
> On Tue, Dec 06, 2016 at 11:18:57PM -0800, Christoph Hellwig wrote:
> > > Wellll... I cobbled together a crappy patch that flips on
> > > XFS_AG_RESV_AGFL if xfs_bmap_extents_to_btree really can't get a block.
> > > It seems to have survived ~175 iterations of xfs/109 so I'll try to
> > > clean it up tomorrow.
> > 
> > I tried it with XFS_AG_RESV_METADATA, but that didn't work.  But then
> > again I didn't add an additional reservation, and I was about to head
> > out for dinner, so I didn't investigate the details.  It might have been
> > the case Ross pointed out yesterday, so I'll look into the details more
> > today.
> 
> XFS_AG_RESV_AGFL works.  For some kinds of "work".  I can't see the
> original issue anymore, but I now see the related assert below a lot
> (which I've also seen before, but not as often), so there is some more I
> need to look into.

I bet that assert is a result of the btree insert failing to find a new
block to expand into.  I've felt for a while that we ought to yell ENOSPC
louder when this happens, since I've hit it numerous times and grumbled
about it not being obvious that we ran out of space.

Anyway, XFS_AG_RESV_AGFL only gets a reservation if rmapbt=1 (or if you
added an additional reservation after dinner), so if you're running
reflink only then it's not surprising that it still runs out of space,
since reflink=1 only reserves RESV_METADATA space.

In any case I'm persuaded that we're failing to account for that bmbt
expansion block when we make the first allocation.  AFAICT, in
xfs_bmapi_allocate we ask the allocator for exactly as many blocks as we
need to satisfy the data block write; if there are exactly that many
blocks in the last AG then we get blocks out of that last AG.  But then
we have to do the extents -> btree conversion, so we make a second
allocation request (and we have to start with that same AG) for another
block, which it doesn't have, so it blows up.

It might work to just increase args->minleft if we have an extents-format
file and think we might have to convert it to btree format.  (It's late;
not going to try this until the morning.)
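
Something like the following fragment is presumably meant (untested, and
adding a count of XFS_BM_MAXLEVELS worst-case bmbt levels is an
assumption here, not the eventual fix):

	/*
	 * If the fork is still in extents format, this allocation may
	 * force a conversion to btree format, so make the allocator
	 * leave enough blocks in the chosen AG for the bmbt blocks too.
	 */
	if (XFS_IFORK_FORMAT(ip, whichfork) == XFS_DINODE_FMT_EXTENTS)
		args.minleft += XFS_BM_MAXLEVELS(mp, whichfork);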

--D

> [ assertion failure and call trace snipped; quoted in full in the previous message ]
Christoph Hellwig Dec. 8, 2016, 2:30 p.m. UTC | #9
On Wed, Dec 07, 2016 at 10:35:03PM -0800, Darrick J. Wong wrote:
> I bet that assert is a result of the btree insert failing to find a new
> block to expand into.  I've felt for a while that we ought to yell ENOSPC
> louder when this happens, since I've hit it numerous times and grumbled
> about it not being obvious that we ran out of space.

Heh.  Took me a while to figure out what caused it last night as well.

> Anyway, XFS_AG_RESV_AGFL only gets a reservation if rmapbt=1 (or if you
> added an additional reservation after dinner), so if you're running
> reflink only then it's not surprising that it still runs out of space,
> since reflink=1 only reserves RESV_METADATA space.

I'm not running reflink only - this is the test case from Eryu with
reflink and rmapbt for now.

But at that point I didn't add RESV_METADATA to xfs_bmbt_alloc_block.
With that one-liner added, xfs/109 seems to be doing fine so far, and
I've had it running for a few hours already today.  Note that this
is still without actually reserving additional blocks in
xfs_ag_resv_init, which is probably needed.
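
For reference, one reading of that one-liner (an assumption, not a quoted
diff; the surrounding lines mirror how xfs_bmbt_alloc_block fills in its
xfs_alloc_arg):

	memset(&args, 0, sizeof(args));
	args.tp = cur->bc_tp;
	args.mp = cur->bc_mp;
	/* the added line: let bmbt blocks dip into the per-AG metadata pool */
	args.resv = XFS_AG_RESV_METADATA;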

Patch

diff --git a/fs/xfs/libxfs/xfs_bmap.c b/fs/xfs/libxfs/xfs_bmap.c
index c6eb219..2c19b11 100644
--- a/fs/xfs/libxfs/xfs_bmap.c
+++ b/fs/xfs/libxfs/xfs_bmap.c
@@ -780,12 +780,14 @@  try_another_ag:
 	if (xfs_sb_version_hasreflink(&cur->bc_mp->m_sb) &&
 	    args.fsbno == NULLFSBLOCK &&
 	    args.type == XFS_ALLOCTYPE_NEAR_BNO) {
+		printk("trying another AG\n");
 		dfops->dop_low = true;
 		goto try_another_ag;
 	}
 	/*
 	 * Allocation can't fail, the space was reserved.
 	 */
+	BUG_ON(args.fsbno == NULLFSBLOCK);
 	ASSERT(args.fsbno != NULLFSBLOCK);
 	ASSERT(*firstblock == NULLFSBLOCK ||
 	       args.agno == XFS_FSB_TO_AGNO(mp, *firstblock) ||