Message ID | 20160419222530.GU2187@wotan.suse.de (mailing list archive)
---|---
State | New, archived
On 2016/04/20 7:25, Mark Fasheh wrote:
> This has been broken since Linux v4.1. We may have worked out a solution on
> the btrfs list but in the meantime sending a test to expose the issue seems
> like a good idea.
>
> Signed-off-by: Mark Fasheh <mfasheh@suse.de>
> ---
>  tests/btrfs/122   | 88 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  tests/btrfs/group |  1 +

You forgot to add tests/btrfs/122.out.

>  2 files changed, 89 insertions(+)
>  create mode 100755 tests/btrfs/122
>
> diff --git a/tests/btrfs/122 b/tests/btrfs/122
> new file mode 100755
> index 0000000..b7e9e4b
> --- /dev/null
> +++ b/tests/btrfs/122
> @@ -0,0 +1,88 @@
[...]
> +# Force a small leaf size to make it easier to blow out our root
> +# subvolume tree
> +_scratch_mkfs "--nodesize 16384"

nodesize 16384 is the default value. Do you
intend other value, for example 4096?

[...]
> +mkdir "$SCRATCH_MNT/data"
> +for i in `seq 0 640`; do
> +	$XFS_IO_PROG -f -c "pwrite 0 1M" "$SCRATCH_MNT/data/file$i" > /dev/null 2>&1
> +done;

";" after "done" is not necessary.

[...]

Thanks,
Satoru
--
To unsubscribe from this list: send the line "unsubscribe fstests" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Thank you for the review, comments are below.

On Wed, Apr 20, 2016 at 09:48:54AM +0900, Satoru Takeuchi wrote:
> On 2016/04/20 7:25, Mark Fasheh wrote:
> > +# Force a small leaf size to make it easier to blow out our root
> > +# subvolume tree
> > +_scratch_mkfs "--nodesize 16384"
>
> nodesize 16384 is the default value. Do you
> intend other value, for example 4096?

"future proofing" I suppose - if we up the default, the for loop below may
not create a level 1 tree.

If we force it smaller than 16K I believe that may mean we can't run this
test on some kernels with page size larger than the typical 4k.
	--Mark

--
Mark Fasheh
Mark Fasheh wrote on 2016/04/21 16:53 -0700:
> Thank you for the review, comments are below.
>
> On Wed, Apr 20, 2016 at 09:48:54AM +0900, Satoru Takeuchi wrote:
>> On 2016/04/20 7:25, Mark Fasheh wrote:
>>> +# Force a small leaf size to make it easier to blow out our root
>>> +# subvolume tree
>>> +_scratch_mkfs "--nodesize 16384"
>>
>> nodesize 16384 is the default value. Do you
>> intend other value, for example 4096?
>
> "future proofing" I suppose - if we up the default, the for loop below may
> not create a level 1 tree.
>
> If we force it smaller than 16K I believe that may mean we can't run this
> test on some kernels with page size larger than the typical 4k.
>	--Mark

Sorry for the late reply.

Unfortunately, on systems with a 64K page size, both mkfs and mount will
fail if we use a 16K nodesize.

IIRC, like some other btrfs qgroup test cases, we use a 64K nodesize as
the safest choice.

As for creating a level 1 tree, the idea is to use inline file extents to
grow the tree quickly: sixteen 4K files should be enough to push it to
level 1. In that case, max_inline=4096 would need to be added to the mount
options, though.

Thanks,
Qu
On Fri, Apr 22, 2016 at 08:26:33AM +0800, Qu Wenruo wrote:
> Sorry for the late reply.
>
> Unfortunately, on systems with a 64K page size, both mkfs and mount will
> fail if we use a 16K nodesize.
>
> IIRC, like some other btrfs qgroup test cases, we use a 64K nodesize as
> the safest choice.
>
> As for creating a level 1 tree, the idea is to use inline file extents to
> grow the tree quickly: sixteen 4K files should be enough to push it to
> level 1. In that case, max_inline=4096 would need to be added to the mount
> options, though.

That all sounds good, thanks. My only concern with filling the tree
entirely with inline extents is that we should be exercising qgroups a
little harder. But maybe we can blow out the tree with inline extents and
then add some actual data extents after that.
	--Mark

--
Mark Fasheh
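For illustration, Qu's suggested approach could look roughly like this. This is only a sketch, not the committed test: in the real test the files would be written under $SCRATCH_MNT on a btrfs scratch device made with a 64K nodesize and mounted with -o max_inline=4096, so that each 4K file is stored as an inline extent in the metadata tree. Here a temporary directory stands in for the scratch mount so the loop shape itself can be shown:

```shell
#!/bin/sh
# Sketch of the inline-extent approach (assumption, not the real test):
# sixteen files of exactly 4096 bytes each. On btrfs with max_inline=4096
# their data would live inline in tree leaves, quickly filling leaf blocks
# and pushing the subvolume tree past level 0.
dir=$(mktemp -d)
for i in $(seq 1 16); do
	# in the committed test this would be $XFS_IO_PROG writing
	# "$SCRATCH_MNT/data/file$i"; dd is used here so the sketch runs anywhere
	dd if=/dev/zero of="$dir/file$i" bs=4096 count=1 2>/dev/null
done
echo "created $(ls "$dir" | wc -l) files"
rm -rf "$dir"
```

Per Mark's follow-up, a variant could run this loop first and then append a few larger buffered writes so the qgroup accounting also sees regular (non-inline) data extents.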
diff --git a/tests/btrfs/122 b/tests/btrfs/122
new file mode 100755
index 0000000..b7e9e4b
--- /dev/null
+++ b/tests/btrfs/122
@@ -0,0 +1,88 @@
+#! /bin/bash
+# FS QA Test No. btrfs/122
+#
+# Test that qgroup counts are valid after snapshot creation. This has
+# been broken in btrfs since Linux v4.1
+#
+#-----------------------------------------------------------------------
+# Copyright (C) 2016 SUSE Linux Products GmbH. All Rights Reserved.
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it would be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write the Free Software Foundation,
+# Inc.,  51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
+#
+#-----------------------------------------------------------------------
+#
+
+seq=`basename $0`
+seqres=$RESULT_DIR/$seq
+echo "QA output created by $seq"
+
+here=`pwd`
+tmp=/tmp/$$
+status=1	# failure is the default!
+trap "_cleanup; exit \$status" 0 1 2 3 15
+
+_cleanup()
+{
+	cd /
+	rm -f $tmp.*
+}
+
+# get standard environment, filters and checks
+. ./common/rc
+. ./common/filter
+
+# remove previous $seqres.full before test
+rm -f $seqres.full
+
+# real QA test starts here
+_supported_fs btrfs
+_supported_os Linux
+_require_scratch
+
+rm -f $seqres.full
+
+# Force a small leaf size to make it easier to blow out our root
+# subvolume tree
+_scratch_mkfs "--nodesize 16384"
+_scratch_mount
+_run_btrfs_util_prog quota enable $SCRATCH_MNT
+
+mkdir "$SCRATCH_MNT/snaps"
+
+# First make some simple snapshots - the bug was initially reproduced like this
+_run_btrfs_util_prog subvolume snapshot $SCRATCH_MNT "$SCRATCH_MNT/snaps/empty1"
+_run_btrfs_util_prog subvolume snapshot $SCRATCH_MNT "$SCRATCH_MNT/snaps/empty2"
+
+# This forces the fs tree out past level 0, adding at least one tree
+# block which must be properly accounted for when we make our next
+# snapshots.
+mkdir "$SCRATCH_MNT/data"
+for i in `seq 0 640`; do
+	$XFS_IO_PROG -f -c "pwrite 0 1M" "$SCRATCH_MNT/data/file$i" > /dev/null 2>&1
+done;
+
+# Snapshot twice.
+_run_btrfs_util_prog subvolume snapshot $SCRATCH_MNT "$SCRATCH_MNT/snaps/snap1"
+_run_btrfs_util_prog subvolume snapshot $SCRATCH_MNT "$SCRATCH_MNT/snaps/snap2"
+
+_scratch_unmount
+
+# generate a qgroup report and look for inconsistent groups
+$BTRFS_UTIL_PROG check --qgroup-report $SCRATCH_DEV 2>&1 | \
+	grep -q -E "Counts for qgroup.*are different"
+if [ $? -ne 0 ]; then
+	status=0
+fi
+
+exit
diff --git a/tests/btrfs/group b/tests/btrfs/group
index 9403daa..f7e8cff 100644
--- a/tests/btrfs/group
+++ b/tests/btrfs/group
@@ -122,3 +122,4 @@
 119 auto quick snapshot metadata qgroup
 120 auto quick snapshot metadata
 121 auto quick snapshot qgroup
+122 auto quick snapshot qgroup
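A note on the pass/fail check at the end of the test, which inverts the usual exit-status convention: `grep -q` succeeds when it *finds* an inconsistency line in the qgroup report, so `status` is cleared to 0 (pass) only when grep fails. The inversion can be exercised standalone; the report strings below are made up for illustration, not actual `btrfs check` output:

```shell
#!/bin/sh
# check_report mirrors the test's final logic: feed it a fake
# "btrfs check --qgroup-report" output and it prints the resulting
# status - 1 (failure) if an inconsistency line is present, else 0.
check_report() {
	status=1
	echo "$1" | grep -q -E "Counts for qgroup.*are different"
	if [ $? -ne 0 ]; then
		status=0
	fi
	echo $status
}

# hypothetical report lines for illustration
check_report "Counts for qgroup id: 257 are different"   # prints 1: test fails
check_report "checking quota groups: all ok"             # prints 0: test passes
```

Because `grep -q` also exits nonzero on an empty report, an empty or truncated report would count as a pass here; matching an explicit "all ok" line instead would make the check stricter.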
This has been broken since Linux v4.1. We may have worked out a solution on
the btrfs list but in the meantime sending a test to expose the issue seems
like a good idea.

Signed-off-by: Mark Fasheh <mfasheh@suse.de>
---
 tests/btrfs/122   | 88 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 tests/btrfs/group |  1 +
 2 files changed, 89 insertions(+)
 create mode 100755 tests/btrfs/122