| Message ID | 1412619830-23088-1-git-send-email-paul.paulson@seagate.com (mailing list archive) |
| --- | --- |
| State | New, archived |
On Mon, Oct 06, 2014 at 01:23:50PM -0500, Paul Paulson wrote:
> The mkfs command fails to create ext4 filesystems on partition sizes
> greater than 1998080 MiB when using 1024 byte blocks and the default
> calculation for the number of inodes reserved for the filesystem.

This is what the MKFS_OPTIONS field is for. We don't usually try to
work around specific peculiarities of specific filesystem configs in
individual tests.

> The following error message is produced when the maximum number of
> inodes is exceeded:
>
> "Cannot create filesystem with requested number of inodes while
> setting up superblock"
>
> The generic/017 test was modified to skip the 1K block size test
> for partitions with an inode count that exceeds the maximum.

Oh, I thought we got rid of the multiple block size loop in that test.
Hmm - maybe I missed picking up that patch from Lucas after we
discussed it. I'll go back and pick it up, and then you won't have this
problem when testing default filesystem configs.

> diff --git a/tests/generic/017 b/tests/generic/017
> index 13b7254..eb38d4d 100755
> --- a/tests/generic/017
> +++ b/tests/generic/017
> @@ -49,8 +49,16 @@ _do_die_on_error=y
>  testfile=$SCRATCH_MNT/$seq.$$
>  BLOCKS=10240
>
> -for (( BSIZE = 1024; BSIZE <= 4096; BSIZE *= 2 )); do
> +MAX_INODE_COUNT_1K=127877120
> +inode_count=`$TUNE2FS_PROG -l $SCRATCH_DEV | awk '/Inode count:/ { print $3 }'`
> +initial_bsize=$(($inode_count <= $MAX_INODE_COUNT_1K ? 1024 : 2048))

FWIW, that'll break every filesystem type other than ext4.

Cheers,

Dave.
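MKFS_OPTIONS is the per-run mkfs override that xfstests reads from its local
configuration, which is what Dave is pointing at. As a rough, illustrative
sketch (the device paths and option values below are placeholders, not taken
from the thread), a very large ext4 scratch device could be handled this way
instead of by patching the test:

```sh
# Hypothetical local.config snippet for xfstests.
# Device paths are placeholders; adjust for your setup.
export FSTYP=ext4
export SCRATCH_DEV=/dev/sdb1
export SCRATCH_MNT=/mnt/scratch

# Use a 4k block size and a larger bytes-per-inode ratio so mke2fs does not
# hit the inode-count ceiling on a multi-terabyte scratch partition.
export MKFS_OPTIONS="-b 4096 -i 16384"
```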
On Mon, Oct 06, 2014 at 01:23:50PM -0500, Paul Paulson wrote:
> The mkfs command fails to create ext4 filesystems on partition sizes
> greater than 1998080 MiB when using 1024 byte blocks and the default
> calculation for the number of inodes reserved for the filesystem.

I've never noticed a problem because creating a partition that large
makes the xfstests runs take a long, long time. I typically use a 5 GB
or 20 GB partition. Is there a particular reason why you are trying to
test ext4 using a 1k blocksize for a 2T file system? We can fix mke2fs
so it doesn't fail when you create a > 2T file system with a 1k block
size, but the bigger question in my mind is why anyone would ever want
to do that?

- Ted
On Mon, Oct 6, 2014 at 7:53 PM, Theodore Ts'o <tytso@mit.edu> wrote:
>
> On Mon, Oct 06, 2014 at 01:23:50PM -0500, Paul Paulson wrote:
> > The mkfs command fails to create ext4 filesystems on partition sizes
> > greater than 1998080 MiB when using 1024 byte blocks and the default
> > calculation for the number of inodes reserved for the filesystem.
>
> I've never noticed a problem because creating a partition that large
> makes the xfstests runs take a long, long time. I typically use a 5 GB
> or 20 GB partition. Is there a particular reason why you are trying to
> test ext4 using a 1k blocksize for a 2T file system? We can fix mke2fs
> so it doesn't fail when you create a > 2T file system with a 1k block
> size, but the bigger question in my mind is why anyone would ever want
> to do that?
>
> - Ted

We'd like to run the full test suite using maximum partition sizes on
SMR drives for functional and performance evaluation purposes. Since
drive capacities are increasing so rapidly, it would be nice if mke2fs
supported filesystems up to the maximum configurations specified in the
Ext4_Disk_Layout document with default filesystem configs. For example,
the 127877120 inode limit that we ran into is only 3% of the number of
inodes specified in the document (2^32 inodes in a 4 TiB filesystem
with a 1 KiB block size in 32-bit mode).
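As a quick sanity check of the figures above (this arithmetic is mine, not
part of the original mail): 4 TiB of 1 KiB blocks is 2^32 blocks, and with
one inode per block that is the 2^32-inode ceiling for 32-bit mode, of which
the 127877120 limit is roughly 3%:

```sh
# 4 TiB / 1 KiB block size = 2^32 blocks (one inode per block at most).
echo $(( 4 * 1024**4 / 1024 ))                   # 4294967296 = 2^32

# The 127877120-inode limit hit here, as a percentage of that ceiling.
echo "scale=3; 127877120 * 100 / 2^32" | bc      # ~2.977
```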
>> diff --git a/tests/generic/017 b/tests/generic/017
>> index 13b7254..eb38d4d 100755
>> --- a/tests/generic/017
>> +++ b/tests/generic/017
>> @@ -49,8 +49,16 @@ _do_die_on_error=y
>>  testfile=$SCRATCH_MNT/$seq.$$
>>  BLOCKS=10240
>>
>> -for (( BSIZE = 1024; BSIZE <= 4096; BSIZE *= 2 )); do
>> +MAX_INODE_COUNT_1K=127877120
>> +inode_count=`$TUNE2FS_PROG -l $SCRATCH_DEV | awk '/Inode count:/ { print $3 }'`
>> +initial_bsize=$(($inode_count <= $MAX_INODE_COUNT_1K ? 1024 : 2048))
>
> FWIW, that'll break every filesystem type other than ext4.

Yes, that was an oversight on my part. The initial_bsize calculation
should have taken the filesystem type into account.
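A rough sketch of what a filesystem-type-aware version of that hunk might
look like (hypothetical; no such revision was posted to the thread):

```sh
# Only consult tune2fs on ext2/3/4; every other filesystem keeps the
# original 1k starting block size. MAX_INODE_COUNT_1K is the empirically
# observed ceiling mentioned earlier in the thread.
MAX_INODE_COUNT_1K=127877120
initial_bsize=1024

case $FSTYP in
ext2|ext3|ext4)
	inode_count=`$TUNE2FS_PROG -l $SCRATCH_DEV | \
		awk '/Inode count:/ { print $3 }'`
	if [ "$inode_count" -gt "$MAX_INODE_COUNT_1K" ]; then
		initial_bsize=2048
	fi
	;;
esac
```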
On Tue, Oct 07, 2014 at 01:12:51PM -0500, Paul Paulson wrote:
> We'd like to run the full test suite using maximum partition sizes on
> SMR drives for functional and performance evaluation purposes. Since
> drive capacities are increasing so rapidly, it would be nice if mke2fs
> supported filesystems up to the maximum configurations specified in
> the Ext4_Disk_Layout document with default filesystem configs. For
> example, the 127877120 inode limit that we ran into is only 3% of the
> number of inodes specified in the document (2^32 inodes in a 4 TiB
> filesystem with a 1 KiB block size in 32-bit mode).

Sure, but the default file system configs don't include 1k block sizes.
There really is only one reason that I care about the 1k block size ---
it makes it easy to validate on an x86 architecture what happens when a
file system with a default 4k block size is mounted on an architecture
such as PowerPC or Itanium, which has a page size of 16k or 64k. That
is, it tests the case where block size < page size.

But we really don't encourage people to use a 1k block size in
production. And while it would make sense from a performance point of
view to use a 16k or 64k block size file system on a PowerPC or Itanium
system, people who care about making their file system portable across
PowerPC and x86 (for example) will need to use a 4k block file system
(since Linux doesn't support block size > page size).

So using a 1k block size on a terabyte file system is neither the
default nor a sane thing to do. I'll look into making mke2fs handle
this case more smoothly, but it's not something that I consider a high
priority or something I would encourage as a realistic production use
case.

Cheers,

- Ted
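To illustrate the scenario Ted describes: on an x86 box (4k pages), a 1k-block
ext4 filesystem exercises the block size < page size paths that a 4k-block
filesystem would only hit on a 16k/64k-page machine. A minimal sketch, with
the device path as a placeholder:

```sh
# Format a small ext4 filesystem with a 1k block size and mount it on x86,
# where the page size is 4k, so block size < page size.
mkfs.ext4 -b 1024 /dev/sdX1
mount /dev/sdX1 /mnt/test

getconf PAGE_SIZE                            # 4096 on x86
tune2fs -l /dev/sdX1 | grep 'Block size'     # Block size: 1024
```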
On Tue, Oct 7, 2014 at 9:11 PM, Theodore Ts'o <tytso@mit.edu> wrote:
> On Tue, Oct 07, 2014 at 01:12:51PM -0500, Paul Paulson wrote:
>> We'd like to run the full test suite using maximum partition sizes on
>> SMR drives for functional and performance evaluation purposes. Since
>> drive capacities are increasing so rapidly, it would be nice if mke2fs
>> supported filesystems up to the maximum configurations specified in
>> the Ext4_Disk_Layout document with default filesystem configs. For
>> example, the 127877120 inode limit that we ran into is only 3% of the
>> number of inodes specified in the document (2^32 inodes in a 4 TiB
>> filesystem with a 1 KiB block size in 32-bit mode).
>
> Sure, but the default file system configs don't include 1k block
> sizes. There really is only one reason that I care about the 1k block
> size --- it makes it easy to validate on an x86 architecture what
> happens when a file system with a default 4k block size is mounted on
> an architecture such as PowerPC or Itanium, which has a page size of
> 16k or 64k. That is, it tests the case where block size < page size.
>
> But we really don't encourage people to use a 1k block size in
> production. And while it would make sense from a performance point of
> view to use a 16k or 64k block size file system on a PowerPC or
> Itanium system, people who care about making their file system
> portable across PowerPC and x86 (for example) will need to use a 4k
> block file system (since Linux doesn't support block size > page
> size).
>
> So using a 1k block size on a terabyte file system is neither the
> default nor a sane thing to do. I'll look into making mke2fs handle
> this case more smoothly, but it's not something that I consider a high
> priority or something I would encourage as a realistic production use
> case.

Thank you for your informative explanation. It sounds like Dave will
bring in the patch to eliminate the multiple block size loop, so we'll
just wait for that.
diff --git a/common/config b/common/config
index d68d4d0..824985e 100644
--- a/common/config
+++ b/common/config
@@ -184,6 +184,7 @@ export LOGGER_PROG="`set_prog_path logger`"
 export DBENCH_PROG="`set_prog_path dbench`"
 export DMSETUP_PROG="`set_prog_path dmsetup`"
 export WIPEFS_PROG="`set_prog_path wipefs`"
+export TUNE2FS_PROG="`set_prog_path tune2fs`"
 
 # Generate a comparable xfsprogs version number in the form of
 # major * 10000 + minor * 100 + release
diff --git a/tests/generic/017 b/tests/generic/017
index 13b7254..eb38d4d 100755
--- a/tests/generic/017
+++ b/tests/generic/017
@@ -49,8 +49,16 @@ _do_die_on_error=y
 testfile=$SCRATCH_MNT/$seq.$$
 BLOCKS=10240
 
-for (( BSIZE = 1024; BSIZE <= 4096; BSIZE *= 2 )); do
+MAX_INODE_COUNT_1K=127877120
+inode_count=`$TUNE2FS_PROG -l $SCRATCH_DEV | awk '/Inode count:/ { print $3 }'`
+initial_bsize=$(($inode_count <= $MAX_INODE_COUNT_1K ? 1024 : 2048))
+iterations=0
+passcount=0
+
+for (( BSIZE = $initial_bsize; BSIZE <= 4096; BSIZE *= 2 )); do
 
+	let iterations++
 	length=$(($BLOCKS * $BSIZE))
 	case $FSTYP in
 	xfs)
@@ -74,7 +82,10 @@ for (( BSIZE = 1024; BSIZE <= 4096; BSIZE *= 2 )); do
 	done
 
 	# Check if 80 extents are present
-	$XFS_IO_PROG -c "fiemap -v" $testfile | grep "^ *[0-9]*:" |wc -l
+	extents=`$XFS_IO_PROG -c "fiemap -v" $testfile | grep "^ *[0-9]*:" | wc -l`
+	if [ $extents -eq 80 ]; then
+		let passcount++
+	fi
 
 	_check_scratch_fs
 	if [ $? -ne 0 ]; then
@@ -85,6 +96,12 @@ for (( BSIZE = 1024; BSIZE <= 4096; BSIZE *= 2 )); do
 	umount $SCRATCH_MNT
 done
 
+if [ $iterations -gt 0 -a $passcount -eq $iterations ]; then
+	echo pass
+else
+	echo fail
+fi
+
 # success, all done
 status=0
 exit
diff --git a/tests/generic/017.out b/tests/generic/017.out
index cc524ac..008ac18 100644
--- a/tests/generic/017.out
+++ b/tests/generic/017.out
@@ -1,4 +1,2 @@
 QA output created by 017
-80
-80
-80
+pass