Message ID | 20230602082759.132364-1-wqu@suse.com (mailing list archive)
State      | New, archived
Series     | btrfs/106: avoid hard coded output to handle different page sizes
On Fri, Jun 2, 2023 at 9:29 AM Qu Wenruo <wqu@suse.com> wrote:
>
> [BUG]
> Test case btrfs/106 is known to fail if the system has a page size other
> than 4K.
>
> This test case can fail like this:
>
>   btrfs/106 5s ... - output mismatch (see ~/xfstests-dev/results//btrfs/106.out.bad)
>       --- tests/btrfs/106.out       2022-11-24 19:53:53.140469437 +0800
>       +++ ~/xfstests-dev/results//btrfs/106.out.bad 2023-06-02 16:12:57.014273249 +0800
>       @@ -5,19 +5,19 @@
>        File contents before unmount:
>        0 aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa
>        *
>       -40
>       +1000
>        File contents after remount:
>        0 aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa
>       ...
>   (Run 'diff -u ~/xfstests-dev/tests/btrfs/106.out /home/adam/xfstests-dev/results//btrfs/106.out.bad' to see the entire diff)
>
> This is particularly problematic for systems like AArch64 or PPC64 which
> support 64K page sizes.
>
> [CAUSE]
> The test case uses the page size to calculate the amount of data to
> write, thus it doesn't support any page size other than 4K.
>
> [FIX]
> Instead of relying on the golden output, compute md5sums and compare them
> before and after the remount.
>
> The new md5sums only go into $seqres.full for debugging, not into the
> golden output, to avoid false alerts on different page sizes.
>
> Signed-off-by: Qu Wenruo <wqu@suse.com>
> ---
>  tests/btrfs/011     |  2 +-
>  tests/btrfs/106     | 15 +++++++++++----
>  tests/btrfs/106.out | 18 ++----------------
>  3 files changed, 14 insertions(+), 21 deletions(-)
>
> diff --git a/tests/btrfs/011 b/tests/btrfs/011
> index 852742ee..dd432296 100755
> --- a/tests/btrfs/011
> +++ b/tests/btrfs/011
> @@ -190,7 +190,7 @@ btrfs_replace_test()
>                  # For above finished case, we still output the error message
>                  # but continue the test, or later profiles won't get tested
>                  # at all.
> -                grep -q canceled $tmp.tmp || echo "btrfs replace status (canceled) failed"
> +                #grep -q canceled $tmp.tmp || echo "btrfs replace status (canceled) failed"

This change to btrfs/011 is not supposed to be here...

With this hunk removed:

Reviewed-by: Filipe Manana <fdmanana@suse.com>

Thanks.

>          else
>                  if [ "${quick}Q" = "thoroughQ" ]; then
>                          # The thorough test runs around 2 * $wait_time seconds.
> diff --git a/tests/btrfs/106 b/tests/btrfs/106
> index db295e70..7496697f 100755
> --- a/tests/btrfs/106
> +++ b/tests/btrfs/106
> @@ -38,8 +38,9 @@ test_clone_and_read_compressed_extent()
>          $CLONER_PROG -s 0 -d $((16 * $PAGE_SIZE)) -l $((16 * $PAGE_SIZE)) \
>                  $SCRATCH_MNT/foo $SCRATCH_MNT/foo
>
> -        echo "File contents before unmount:"
> -        od -t x1 $SCRATCH_MNT/foo | _filter_od
> +        echo "Hash before unmount:" >> $seqres.full
> +        old_md5=$(_md5_checksum "$SCRATCH_MNT/foo")
> +        echo "$old_md5" >> $seqres.full
>
>          # Remount the fs or clear the page cache to trigger the bug in btrfs.
>          # Because the extent has an uncompressed length that is a multiple of 16
> @@ -52,9 +53,15 @@
>          # correctly.
>          _scratch_cycle_mount
>
> -        echo "File contents after remount:"
> +        echo "Hash after remount:" >> $seqres.full
>          # Must match the digest we got before.
> -        od -t x1 $SCRATCH_MNT/foo | _filter_od
> +        new_md5=$(_md5_checksum "$SCRATCH_MNT/foo")
> +        echo "$new_md5" >> $seqres.full
> +        if [ "$old_md5" != "$new_md5" ]; then
> +                echo "Hash mismatches after remount"
> +        else
> +                echo "Hash matches after remount"
> +        fi
>  }
>
>  echo -e "\nTesting with zlib compression..."
> diff --git a/tests/btrfs/106.out b/tests/btrfs/106.out
> index 1144a82f..cd69cdd7 100644
> --- a/tests/btrfs/106.out
> +++ b/tests/btrfs/106.out
> @@ -2,22 +2,8 @@ QA output created by 106
>
>  Testing with zlib compression...
>  Pages modified: [0 - 15]
> -File contents before unmount:
> -0 aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa
> -*
> -40
> -File contents after remount:
> -0 aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa
> -*
> -40
> +Hash matches after remount
>
>  Testing with lzo compression...
>  Pages modified: [0 - 15]
> -File contents before unmount:
> -0 aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa
> -*
> -40
> -File contents after remount:
> -0 aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa
> -*
> -40
> +Hash matches after remount
> --
> 2.39.0
>
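For reference, the verification pattern the patch switches to can be sketched outside the xfstests harness with plain md5sum. This is only an illustration of the approach, not part of the patch: the mount point, device, file name and the stand-in for $seqres.full below are made up for the example, while a real fstests case keeps using the _md5_checksum and _scratch_cycle_mount helpers as in the diff above.

#!/bin/bash
# Sketch of the page-size-agnostic check: write 16 pages of 0xaa, remember the
# digest, cycle the mount, and only emit a fixed pass/fail line.  The digests
# themselves go to a debug log (the role $seqres.full plays in fstests), so
# the comparable output never depends on the page size.
# MNT, DEV, FILE and DEBUG_LOG are placeholders for this example.
MNT=/mnt/scratch
DEV=$(findmnt -n -o SOURCE "$MNT")
FILE=$MNT/foo
DEBUG_LOG=/tmp/106.full

page_size=$(getconf PAGE_SIZE)

# Write 16 pages of 0xaa, however large a page is on this machine.
xfs_io -f -c "pwrite -S 0xaa 0 $((16 * page_size))" "$FILE" > /dev/null

# Digest before the remount, kept out of the comparable output.
old_md5=$(md5sum "$FILE" | cut -d ' ' -f 1)
echo "Hash before unmount: $old_md5" >> "$DEBUG_LOG"

# Cycle the mount so the data is read back from disk (what
# _scratch_cycle_mount does inside fstests).
umount "$MNT"
mount "$DEV" "$MNT"

# Digest after the remount; must match the one recorded above.
new_md5=$(md5sum "$FILE" | cut -d ' ' -f 1)
echo "Hash after remount: $new_md5" >> "$DEBUG_LOG"

# Only this fixed string would land in the golden output.
if [ "$old_md5" != "$new_md5" ]; then
        echo "Hash mismatches after remount"
else
        echo "Hash matches after remount"
fi

With this structure the comparable output is identical on 4K and 64K page systems, which is exactly why the patch drops the od dump from 106.out.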