diff mbox

[1/4] xfs/104: log size too small for 4k sector drives

Message ID 1424818479-10083-2-git-send-email-david@fromorbit.com (mailing list archive)
State New, archived
Headers show

Commit Message

Dave Chinner Feb. 24, 2015, 10:54 p.m. UTC
From: Dave Chinner <dchinner@redhat.com>

xfs/104, xfs/119, xfs/291 and xfs/297 have small fixed log sizes. A
recent change to the kernel ramdisk changed its physical sector
size from 512B to 4kB, and this results in mkfs calculating a log
size larger than the fixed test size and hence the tests fail.

Change the log size to a larger size that works with 4k sectors, and
also increase the size of the filesystem being created so that the
amount of data space in the filesystem does not change and hence
does not perturb the rest of the test.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
---
 tests/xfs/104     |  8 ++++----
 tests/xfs/104.out | 46 +++++++++++++++++++++++-----------------------
 tests/xfs/119     |  2 +-
 tests/xfs/291     |  2 +-
 tests/xfs/297     |  2 +-
 5 files changed, 30 insertions(+), 30 deletions(-)
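For context, the rough arithmetic behind these failures can be sketched in shell (illustrative only; the 1605-block figure comes from the mkfs transcript quoted later in this thread, not from running mkfs here):

```shell
# A fixed 5 MiB log expressed in 4 KiB filesystem blocks:
echo $((5 * 1048576 / 4096))     # 1280 blocks
# mkfs.xfs on a 4k-sector ramdisk computes a 1605-block minimum internal
# log, which no longer fits in the fixed 5m size, so mkfs fails.
# The enlarged 10 MiB log leaves comfortable headroom:
echo $((10 * 1048576 / 4096))    # 2560 blocks
```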

Comments

Brian Foster Feb. 25, 2015, 4:11 p.m. UTC | #1
On Wed, Feb 25, 2015 at 09:54:36AM +1100, Dave Chinner wrote:
> From: Dave Chinner <dchinner@redhat.com>
> 
> xfs/104, xfs/119, xfs/291 and xfs/297 have small fixed log sizes. A
> recent change to the kernel ramdisk changed its physical sector
> size from 512B to 4kB, and this results in mkfs calculating a log
> size larger than the fixed test size and hence the tests fail.
> 
> Change the log size to a larger size that works with 4k sectors, and
> also increase the size of the filesystem being created so that the
> amount of data space in the filesystem does not change and hence
> does not perturb the rest of the test.
> 
> Signed-off-by: Dave Chinner <dchinner@redhat.com>
> ---

Well for some reason I can't mount a ramdisk on the current tot to test
this. In fact, I can't mount _anything_ after the ramdisk mount attempt.
The mount actually reports success too, but there's nothing there... :/

# modprobe brd
# mkfs.xfs -f /dev/ram0 
meta-data=/dev/ram0              isize=256    agcount=1, agsize=4096 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=4096, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=1605, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
# mount /dev/ram0 /mnt/
# mount | grep mnt
# umount  /mnt/
umount: /mnt/: not mounted

... and then I can't even mount my normal scratch device until after a
reboot:

# mount /dev/test/scratch /mnt/
# mount | grep mnt
# umount  /mnt/
umount: /mnt/: not mounted

I see some general flakiness with loop devices as well (re: patch 3/4).
Anyways... until I get a chance to look at that, a couple nits to
follow. Otherwise this looks Ok to me.

>  tests/xfs/104     |  8 ++++----
>  tests/xfs/104.out | 46 +++++++++++++++++++++++-----------------------
>  tests/xfs/119     |  2 +-
>  tests/xfs/291     |  2 +-
>  tests/xfs/297     |  2 +-
>  5 files changed, 30 insertions(+), 30 deletions(-)
> 
> diff --git a/tests/xfs/104 b/tests/xfs/104
> index 69fcc69..ca2ae21 100755
> --- a/tests/xfs/104
> +++ b/tests/xfs/104
> @@ -81,10 +81,10 @@ modsize=`expr   4 \* $incsize`	# pause after this many increments
>  [ `expr $endsize / $dbsize` -lt $dblocks ] || _notrun "Scratch device too small"
>  
>  nags=4
> -size=`expr 120 \* 1048576`	# 120 megabytes initially
> +size=`expr 125 \* 1048576`	# 120 megabytes initially

The comment is wrong now.
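To make the nit concrete (plain arithmetic, not taken from the patch):

```shell
# The multiplier is now 125, but the comment still says 120 megabytes:
echo $((125 * 1048576))   # 131072000 bytes, i.e. 125 MiB
echo $((120 * 1048576))   # 125829120 bytes, the old initial size
```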

>  sizeb=`expr $size / $dbsize`	# in data blocks
>  echo "*** creating scratch filesystem"
> -_create_scratch -lsize=5m -dsize=${size} -dagcount=${nags}
> +_create_scratch -lsize=10m -dsize=${size} -dagcount=${nags}
>  
>  fillsize=`expr 110 \* 1048576`	# 110 megabytes of filling
>  echo "*** using some initial space on scratch filesystem"
> @@ -95,13 +95,13 @@ _fill_scratch $fillsize
>  # Kick off more stress threads on each iteration, grow; repeat.
>  #
>  while [ $size -le $endsize ]; do
> -	echo "*** stressing a ${size} byte filesystem"
> +	echo "*** stressing filesystem"
>  	echo "*** stressing a ${sizeb} block filesystem" >> $seqres.full
>  	_stress_scratch
>  	sleep 1
>  	size=`expr $size + $incsize`
>  	sizeb=`expr $size / $dbsize`	# in data blocks
> -	echo "*** growing to a ${size} byte filesystem"
> +	echo "*** growing filesystem"
>  	echo "*** growing to a ${sizeb} block filesystem" >> $seqres.full
>  	xfs_growfs -D ${sizeb} $SCRATCH_MNT \
>  		| tee -a $seqres.full | _filter_mkfs 2>$tmp.growfs
> diff --git a/tests/xfs/104.out b/tests/xfs/104.out
> index f237e5e..de6c7f2 100644
> --- a/tests/xfs/104.out
> +++ b/tests/xfs/104.out
> @@ -15,8 +15,8 @@ log      =LDEV bsize=XXX blocks=XXX
>  realtime =RDEV extsz=XXX blocks=XXX, rtextents=XXX
>  *** mount
>  *** using some initial space on scratch filesystem
> -*** stressing a 125829120 byte filesystem
> -*** growing to a 169869312 byte filesystem
> +*** stressing filesystem
> +*** growing filesystem
>  meta-data=DDEV isize=XXX agcount=N, agsize=XXX blks
>  data     = bsize=XXX blocks=XXX, imaxpct=PCT
>           = sunit=XXX swidth=XXX, unwritten=X
> @@ -25,8 +25,8 @@ log      =LDEV bsize=XXX blocks=XXX
>  realtime =RDEV extsz=XXX blocks=XXX, rtextents=XXX
>  AGCOUNT=4
>  
> -*** stressing a 169869312 byte filesystem
> -*** growing to a 213909504 byte filesystem
> +*** stressing filesystem
> +*** growing filesystem
>  meta-data=DDEV isize=XXX agcount=N, agsize=XXX blks
>  data     = bsize=XXX blocks=XXX, imaxpct=PCT
>           = sunit=XXX swidth=XXX, unwritten=X
> @@ -35,8 +35,8 @@ log      =LDEV bsize=XXX blocks=XXX
>  realtime =RDEV extsz=XXX blocks=XXX, rtextents=XXX
>  AGCOUNT=6
>  
> -*** stressing a 213909504 byte filesystem
> -*** growing to a 257949696 byte filesystem
> +*** stressing filesystem
> +*** growing filesystem
>  meta-data=DDEV isize=XXX agcount=N, agsize=XXX blks
>  data     = bsize=XXX blocks=XXX, imaxpct=PCT
>           = sunit=XXX swidth=XXX, unwritten=X
> @@ -45,8 +45,8 @@ log      =LDEV bsize=XXX blocks=XXX
>  realtime =RDEV extsz=XXX blocks=XXX, rtextents=XXX
>  AGCOUNT=7
>  
> -*** stressing a 257949696 byte filesystem
> -*** growing to a 301989888 byte filesystem
> +*** stressing filesystem
> +*** growing filesystem
>  meta-data=DDEV isize=XXX agcount=N, agsize=XXX blks
>  data     = bsize=XXX blocks=XXX, imaxpct=PCT
>           = sunit=XXX swidth=XXX, unwritten=X
> @@ -55,8 +55,8 @@ log      =LDEV bsize=XXX blocks=XXX
>  realtime =RDEV extsz=XXX blocks=XXX, rtextents=XXX
>  AGCOUNT=9
>  
> -*** stressing a 301989888 byte filesystem
> -*** growing to a 346030080 byte filesystem
> +*** stressing filesystem
> +*** growing filesystem
>  meta-data=DDEV isize=XXX agcount=N, agsize=XXX blks
>  data     = bsize=XXX blocks=XXX, imaxpct=PCT
>           = sunit=XXX swidth=XXX, unwritten=X
> @@ -65,8 +65,8 @@ log      =LDEV bsize=XXX blocks=XXX
>  realtime =RDEV extsz=XXX blocks=XXX, rtextents=XXX
>  AGCOUNT=10
>  
> -*** stressing a 346030080 byte filesystem
> -*** growing to a 390070272 byte filesystem
> +*** stressing filesystem
> +*** growing filesystem
>  meta-data=DDEV isize=XXX agcount=N, agsize=XXX blks
>  data     = bsize=XXX blocks=XXX, imaxpct=PCT
>           = sunit=XXX swidth=XXX, unwritten=X
> @@ -75,8 +75,8 @@ log      =LDEV bsize=XXX blocks=XXX
>  realtime =RDEV extsz=XXX blocks=XXX, rtextents=XXX
>  AGCOUNT=11
>  
> -*** stressing a 390070272 byte filesystem
> -*** growing to a 434110464 byte filesystem
> +*** stressing filesystem
> +*** growing filesystem
>  meta-data=DDEV isize=XXX agcount=N, agsize=XXX blks
>  data     = bsize=XXX blocks=XXX, imaxpct=PCT
>           = sunit=XXX swidth=XXX, unwritten=X
> @@ -85,8 +85,8 @@ log      =LDEV bsize=XXX blocks=XXX
>  realtime =RDEV extsz=XXX blocks=XXX, rtextents=XXX
>  AGCOUNT=13
>  
> -*** stressing a 434110464 byte filesystem
> -*** growing to a 478150656 byte filesystem
> +*** stressing filesystem
> +*** growing filesystem
>  meta-data=DDEV isize=XXX agcount=N, agsize=XXX blks
>  data     = bsize=XXX blocks=XXX, imaxpct=PCT
>           = sunit=XXX swidth=XXX, unwritten=X
> @@ -95,18 +95,18 @@ log      =LDEV bsize=XXX blocks=XXX
>  realtime =RDEV extsz=XXX blocks=XXX, rtextents=XXX
>  AGCOUNT=14
>  
> -*** stressing a 478150656 byte filesystem
> -*** growing to a 522190848 byte filesystem
> +*** stressing filesystem
> +*** growing filesystem
>  meta-data=DDEV isize=XXX agcount=N, agsize=XXX blks
>  data     = bsize=XXX blocks=XXX, imaxpct=PCT
>           = sunit=XXX swidth=XXX, unwritten=X
>  naming   =VERN bsize=XXX
>  log      =LDEV bsize=XXX blocks=XXX
>  realtime =RDEV extsz=XXX blocks=XXX, rtextents=XXX
> -AGCOUNT=16
> +AGCOUNT=15
>  
> -*** stressing a 522190848 byte filesystem
> -*** growing to a 566231040 byte filesystem
> +*** stressing filesystem
> +*** growing filesystem
>  meta-data=DDEV isize=XXX agcount=N, agsize=XXX blks
>  data     = bsize=XXX blocks=XXX, imaxpct=PCT
>           = sunit=XXX swidth=XXX, unwritten=X
> @@ -115,8 +115,8 @@ log      =LDEV bsize=XXX blocks=XXX
>  realtime =RDEV extsz=XXX blocks=XXX, rtextents=XXX
>  AGCOUNT=17
>  
> -*** stressing a 566231040 byte filesystem
> -*** growing to a 610271232 byte filesystem
> +*** stressing filesystem
> +*** growing filesystem
>  meta-data=DDEV isize=XXX agcount=N, agsize=XXX blks
>  data     = bsize=XXX blocks=XXX, imaxpct=PCT
>           = sunit=XXX swidth=XXX, unwritten=X
> diff --git a/tests/xfs/119 b/tests/xfs/119
> index c7c46d9..490495b 100755
> --- a/tests/xfs/119
> +++ b/tests/xfs/119
> @@ -54,7 +54,7 @@ _require_scratch
>  # this may hang
>  sync
>  
> -export MKFS_OPTIONS="-l version=2,size=1200b,su=64k" 
> +export MKFS_OPTIONS="-l version=2,size=2500b,su=64k" 

Trailing space here.

Brian

>  export MOUNT_OPTIONS="-o logbsize=64k"
>  _scratch_mkfs_xfs >/dev/null
>  
> diff --git a/tests/xfs/291 b/tests/xfs/291
> index fbf9c51..c226e65 100755
> --- a/tests/xfs/291
> +++ b/tests/xfs/291
> @@ -46,7 +46,7 @@ _supported_os IRIX Linux
>  # real QA test starts here
>  rm -f $seqres.full
>  _require_scratch
> -_scratch_mkfs_xfs -n size=16k -l size=5m -d size=128m >> $seqres.full 2>&1
> +_scratch_mkfs_xfs -n size=16k -l size=10m -d size=133m >> $seqres.full 2>&1
>  _scratch_mount
>  
>  # First we cause very badly fragmented freespace, then
> diff --git a/tests/xfs/297 b/tests/xfs/297
> index 1cdbbb9..25b597e 100755
> --- a/tests/xfs/297
> +++ b/tests/xfs/297
> @@ -50,7 +50,7 @@ _require_scratch
>  _require_freeze
>  
>  rm -f $seqres.full
> -_scratch_mkfs_xfs -d agcount=16,su=256k,sw=12 -l su=256k,size=2560b >/dev/null 2>&1
> +_scratch_mkfs_xfs -d agcount=16,su=256k,sw=12 -l su=256k,size=5120b >/dev/null 2>&1
>  _scratch_mount >/dev/null 2>&1
>  
>  STRESS_DIR="$SCRATCH_MNT/testdir"
> -- 
> 2.0.0
> 
--
To unsubscribe from this list: send the line "unsubscribe fstests" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Dave Chinner Feb. 25, 2015, 10:32 p.m. UTC | #2
[cc linux-fsdevel, Boaz and others]

On Wed, Feb 25, 2015 at 11:11:51AM -0500, Brian Foster wrote:
> On Wed, Feb 25, 2015 at 09:54:36AM +1100, Dave Chinner wrote:
> > From: Dave Chinner <dchinner@redhat.com>
> > 
> > xfs/104, xfs/119, xfs/291 and xfs/297 have small fixed log sizes. A
> > recent change to the kernel ramdisk changed its physical sector
> > size from 512B to 4kB, and this results in mkfs calculating a log
> > size larger than the fixed test size and hence the tests fail.
> > 
> > Change the log size to a larger size that works with 4k sectors, and
> > also increase the size of the filesystem being created so that the
> > amount of data space in the filesystem does not change and hence
> > does not perturb the rest of the test.
> > 
> > Signed-off-by: Dave Chinner <dchinner@redhat.com>
> > ---
> 
> Well for some reason I can't mount a ramdisk on the current tot to test
> this. In fact, I can't mount _anything_ after the ramdisk mount attempt.
> The mount actually reports success too, but there's nothing there... :/
> 
> # modprobe brd
> # mkfs.xfs -f /dev/ram0 
> meta-data=/dev/ram0              isize=256    agcount=1, agsize=4096 blks
>          =                       sectsz=4096  attr=2, projid32bit=1
>          =                       crc=0        finobt=0
> data     =                       bsize=4096   blocks=4096, imaxpct=25
>          =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
> log      =internal log           bsize=4096   blocks=1605, version=2
>          =                       sectsz=4096  sunit=1 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
> # mount /dev/ram0 /mnt/
> # mount | grep mnt
> # umount  /mnt/
> umount: /mnt/: not mounted
> 
> ... and then I can't even mount my normal scratch device until after a
> reboot:
> 
> # mount /dev/test/scratch /mnt/
> # mount | grep mnt
> # umount  /mnt/
> umount: /mnt/: not mounted

Ok, so that's just plain broken. What's in dmesg?

As it is, I'm seeing plenty of weirdness in 4.0-rc1 on ramdisks as
well. Apart from the change to 4k physical sector size causing all
sorts of chaos with xfstests results due to it changing mkfs.xfs
behaviour, I'm also seeing this happen randomly:

....
Feb 25 11:48:35 test4 dave: run xfstest generic/083
Feb 25 11:48:37 test4 kernel: [ 8732.316223] XFS (ram1): Mounting V5 Filesystem
Feb 25 11:48:37 test4 kernel: [ 8732.318904] XFS (ram1): Ending clean mount
Feb 25 11:48:40 test4 kernel: [ 8735.871968] XFS (ram1): Unmounting Filesystem
Feb 25 11:48:40 test4 kernel: [ 8735.930160]  ram1: [POWERTEC] p1 p2 p3 p4 p5 p6 p7
Feb 25 11:48:40 test4 kernel: [ 8735.932081] ram1: p2 start 3158599292 is beyond EOD, truncated
Feb 25 11:48:40 test4 kernel: [ 8735.933983] ram1: p3 size 1627389952 extends beyond EOD, truncated
Feb 25 11:48:40 test4 kernel: [ 8735.936177] ram1: p4 size 1158021120 extends beyond EOD, truncated
Feb 25 11:48:40 test4 kernel: [ 8735.938269] ram1: p5 start 50924556 is beyond EOD, truncated
Feb 25 11:48:40 test4 kernel: [ 8735.940103] ram1: p6 size 67108864 extends beyond EOD, truncated
Feb 25 11:48:40 test4 kernel: [ 8735.942101] ram1: p7 start 4294967295 is beyond EOD, truncated
Feb 25 11:48:40 test4 dave: run xfstest generic/088
....

Something is causing partition rescans on ram devices that don't
have partitions, and this is new behaviour. Boaz, your commit
937af5ecd05 ("brd: Fix all partitions BUGs") seems the likely cause
of this problem I'm seeing - looks like a behaviour regression to
me as no other block device I have on any machine running the
same kernel throws these strange warnings from partition probing...

Cheers,

Dave.
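The offsets in those messages suggest the "partition table" being scanned is just leftover data; for instance, p7's start sector is the all-ones 32-bit value. A quick illustration (the value is copied from the log above):

```shell
# p7 "start 4294967295" is 0xffffffff (2^32 - 1) -- a telltale garbage
# value rather than a plausible partition offset on a small ramdisk:
printf '%x\n' 4294967295   # prints ffffffff
```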
Brian Foster Feb. 25, 2015, 11:31 p.m. UTC | #3
On Thu, Feb 26, 2015 at 09:32:48AM +1100, Dave Chinner wrote:
> [cc linux-fsdevel, Boaz and others]
> 
> On Wed, Feb 25, 2015 at 11:11:51AM -0500, Brian Foster wrote:
> > On Wed, Feb 25, 2015 at 09:54:36AM +1100, Dave Chinner wrote:
> > > From: Dave Chinner <dchinner@redhat.com>
> > > 
> > > xfs/104, xfs/119, xfs/291 and xfs/297 have small fixed log sizes. A
> > > recent change to the kernel ramdisk changed its physical sector
> > > size from 512B to 4kB, and this results in mkfs calculating a log
> > > size larger than the fixed test size and hence the tests fail.
> > > 
> > > Change the log size to a larger size that works with 4k sectors, and
> > > also increase the size of the filesystem being created so that the
> > > amount of data space in the filesystem does not change and hence
> > > does not perturb the rest of the test.
> > > 
> > > Signed-off-by: Dave Chinner <dchinner@redhat.com>
> > > ---
> > 
> > Well for some reason I can't mount a ramdisk on the current tot to test
> > this. In fact, I can't mount _anything_ after the ramdisk mount attempt.
> > The mount actually reports success too, but there's nothing there... :/
> > 
> > # modprobe brd
> > # mkfs.xfs -f /dev/ram0 
> > meta-data=/dev/ram0              isize=256    agcount=1, agsize=4096 blks
> >          =                       sectsz=4096  attr=2, projid32bit=1
> >          =                       crc=0        finobt=0
> > data     =                       bsize=4096   blocks=4096, imaxpct=25
> >          =                       sunit=0      swidth=0 blks
> > naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
> > log      =internal log           bsize=4096   blocks=1605, version=2
> >          =                       sectsz=4096  sunit=1 blks, lazy-count=1
> > realtime =none                   extsz=4096   blocks=0, rtextents=0
> > # mount /dev/ram0 /mnt/
> > # mount | grep mnt
> > # umount  /mnt/
> > umount: /mnt/: not mounted
> > 
> > ... and then I can't even mount my normal scratch device until after a
> > reboot:
> > 
> > # mount /dev/test/scratch /mnt/
> > # mount | grep mnt
> > # umount  /mnt/
> > umount: /mnt/: not mounted
> 
> Ok, so that's just plain broken. What's in dmesg?
> 

Once I got back to this I found that for some reason systemd is
immediately invoking a umount on the mount. :/ No idea why or how to
stop it, but if I do something like this:

mount /dev/ram0 /mnt; cd /mnt

... I can occasionally win the race and get systemd to spin in a
umount() cycle trying to undo the mount. I haven't gone back to confirm
it's the same behavior with the normal devices at that point, but I
suspect it is, perhaps due to getting into some kind of bad state.

So fyi that this particular problem doesn't appear to be directly kernel
related...

Brian

> As it is, I'm seeing plenty of weirdness in 4.0-rc1 on ramdisks as
> well. Apart from the change to 4k physical sector size causing all
> sorts of chaos with xfstests results due to it changing mkfs.xfs
> behaviour, I'm also seeing this happen randomly:
> 
> ....
> Feb 25 11:48:35 test4 dave: run xfstest generic/083
> Feb 25 11:48:37 test4 kernel: [ 8732.316223] XFS (ram1): Mounting V5 Filesystem
> Feb 25 11:48:37 test4 kernel: [ 8732.318904] XFS (ram1): Ending clean mount
> Feb 25 11:48:40 test4 kernel: [ 8735.871968] XFS (ram1): Unmounting Filesystem
> Feb 25 11:48:40 test4 kernel: [ 8735.930160]  ram1: [POWERTEC] p1 p2 p3 p4 p5 p6 p7
> Feb 25 11:48:40 test4 kernel: [ 8735.932081] ram1: p2 start 3158599292 is beyond EOD, truncated
> Feb 25 11:48:40 test4 kernel: [ 8735.933983] ram1: p3 size 1627389952 extends beyond EOD, truncated
> Feb 25 11:48:40 test4 kernel: [ 8735.936177] ram1: p4 size 1158021120 extends beyond EOD, truncated
> Feb 25 11:48:40 test4 kernel: [ 8735.938269] ram1: p5 start 50924556 is beyond EOD, truncated
> Feb 25 11:48:40 test4 kernel: [ 8735.940103] ram1: p6 size 67108864 extends beyond EOD, truncated
> Feb 25 11:48:40 test4 kernel: [ 8735.942101] ram1: p7 start 4294967295 is beyond EOD, truncated
> Feb 25 11:48:40 test4 dave: run xfstest generic/088
> ....
> 
> Something is causing partition rescans on ram devices that don't
> have partitions, and this is new behaviour. Boaz, your commit
> 937af5ecd05 ("brd: Fix all partitions BUGs") seems the likely cause
> of this problem I'm seeing - looks like a behaviour regression to
> me as no other block device I have on any machine running the
> same kernel throws these strange warnings from partition probing...
> 
> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com
Dave Chinner Feb. 25, 2015, 11:43 p.m. UTC | #4
On Wed, Feb 25, 2015 at 06:31:15PM -0500, Brian Foster wrote:
> On Thu, Feb 26, 2015 at 09:32:48AM +1100, Dave Chinner wrote:
> > [cc linux-fsdevel, Boaz and others]
> > 
> > On Wed, Feb 25, 2015 at 11:11:51AM -0500, Brian Foster wrote:
> > > On Wed, Feb 25, 2015 at 09:54:36AM +1100, Dave Chinner wrote:
> > > > From: Dave Chinner <dchinner@redhat.com>
> > > > 
> > > > xfs/104, xfs/119, xfs/291 and xfs/297 have small fixed log sizes. A
> > > > recent change to the kernel ramdisk changed its physical sector
> > > > size from 512B to 4kB, and this results in mkfs calculating a log
> > > > size larger than the fixed test size and hence the tests fail.
> > > > 
> > > > Change the log size to a larger size that works with 4k sectors, and
> > > > also increase the size of the filesystem being created so that the
> > > > amount of data space in the filesystem does not change and hence
> > > > does not perturb the rest of the test.
> > > > 
> > > > Signed-off-by: Dave Chinner <dchinner@redhat.com>
> > > > ---
> > > 
> > > Well for some reason I can't mount a ramdisk on the current tot to test
> > > this. In fact, I can't mount _anything_ after the ramdisk mount attempt.
> > > The mount actually reports success too, but there's nothing there... :/
> > > 
> > > # modprobe brd
> > > # mkfs.xfs -f /dev/ram0 
> > > meta-data=/dev/ram0              isize=256    agcount=1, agsize=4096 blks
> > >          =                       sectsz=4096  attr=2, projid32bit=1
> > >          =                       crc=0        finobt=0
> > > data     =                       bsize=4096   blocks=4096, imaxpct=25
> > >          =                       sunit=0      swidth=0 blks
> > > naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
> > > log      =internal log           bsize=4096   blocks=1605, version=2
> > >          =                       sectsz=4096  sunit=1 blks, lazy-count=1
> > > realtime =none                   extsz=4096   blocks=0, rtextents=0
> > > # mount /dev/ram0 /mnt/
> > > # mount | grep mnt
> > > # umount  /mnt/
> > > umount: /mnt/: not mounted
> > > 
> > > ... and then I can't even mount my normal scratch device until after a
> > > reboot:
> > > 
> > > # mount /dev/test/scratch /mnt/
> > > # mount | grep mnt
> > > # umount  /mnt/
> > > umount: /mnt/: not mounted
> > 
> > Ok, so that's just plain broken. What's in dmesg?
> > 
> 
> Once I got back to this I found that for some reason systemd is
> immediately invoking a umount on the mount. :/ No idea why or how to
> stop it, but if I do something like this:
> 
> mount /dev/ram0 /mnt; cd /mnt
> 
> ... I can occasionally win the race and get systemd to spin in a
> umount() cycle trying to undo the mount. I haven't gone back to confirm
> it's the same behavior with the normal devices at that point, but I
> suspect it is, perhaps due to getting into some kind of bad state.
> 
> So fyi that this particular problem doesn't appear to be directly kernel
> related...

It may still be related to the kernel changes, e.g. by triggering
udev events when they didn't previously. The only machine I have
that is triggering the partition probing is also the only test
machine that I have that runs systemd and it didn't have this
problem on 3.19.

Cheers,

Dave.
Boaz Harrosh Feb. 26, 2015, 7:46 a.m. UTC | #5
On 02/26/2015 01:43 AM, Dave Chinner wrote:
> On Wed, Feb 25, 2015 at 06:31:15PM -0500, Brian Foster wrote:
>> On Thu, Feb 26, 2015 at 09:32:48AM +1100, Dave Chinner wrote:
>>> [cc linux-fsdevel, Boaz and others]
>>>
>>> On Wed, Feb 25, 2015 at 11:11:51AM -0500, Brian Foster wrote:
>>>> On Wed, Feb 25, 2015 at 09:54:36AM +1100, Dave Chinner wrote:
>>>>> From: Dave Chinner <dchinner@redhat.com>
>>>>>
>>>>> xfs/104, xfs/119, xfs/291 and xfs/297 have small fixed log sizes. A
>>>>> recent change to the kernel ramdisk changed its physical sector
>>>>> size from 512B to 4kB, and this results in mkfs calculating a log
>>>>> size larger than the fixed test size and hence the tests fail.
>>>>>
>>>>> Change the log size to a larger size that works with 4k sectors, and
>>>>> also increase the size of the filesystem being created so that the
>>>>> amount of data space in the filesystem does not change and hence
>>>>> does not perturb the rest of the test.
>>>>>
>>>>> Signed-off-by: Dave Chinner <dchinner@redhat.com>
>>>>> ---
>>>>
>>>> Well for some reason I can't mount a ramdisk on the current tot to test
>>>> this. In fact, I can't mount _anything_ after the ramdisk mount attempt.
>>>> The mount actually reports success too, but there's nothing there... :/
>>>>
>>>> # modprobe brd
>>>> # mkfs.xfs -f /dev/ram0 
>>>> meta-data=/dev/ram0              isize=256    agcount=1, agsize=4096 blks
>>>>          =                       sectsz=4096  attr=2, projid32bit=1
>>>>          =                       crc=0        finobt=0
>>>> data     =                       bsize=4096   blocks=4096, imaxpct=25
>>>>          =                       sunit=0      swidth=0 blks
>>>> naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
>>>> log      =internal log           bsize=4096   blocks=1605, version=2
>>>>          =                       sectsz=4096  sunit=1 blks, lazy-count=1
>>>> realtime =none                   extsz=4096   blocks=0, rtextents=0
>>>> # mount /dev/ram0 /mnt/
>>>> # mount | grep mnt
>>>> # umount  /mnt/
>>>> umount: /mnt/: not mounted
>>>>
>>>> ... and then I can't even mount my normal scratch device until after a
>>>> reboot:
>>>>
>>>> # mount /dev/test/scratch /mnt/
>>>> # mount | grep mnt
>>>> # umount  /mnt/
>>>> umount: /mnt/: not mounted
>>>
>>> Ok, so that's just plain broken. What's in dmesg?
>>>
>>
>> Once I got back to this I found that for some reason systemd is
>> immediately invoking a umount on the mount. :/ No idea why or how to
>> stop it, but if I do something like this:
>>
>> mount /dev/ram0 /mnt; cd /mnt
>>
>> ... I can occasionally win the race and get systemd to spin in a
>> umount() cycle trying to undo the mount. I haven't gone back to confirm
>> it's the same behavior with the normal devices at that point, but I
>> suspect it is, perhaps due to getting into some kind of bad state.
>>
>> So fyi that this particular problem doesn't appear to be directly kernel
>> related...
> 
> It may still be related to the kernel changes, e.g. by triggering
> udev events when they didn't previously. The only machine I have
> that is triggering the partition probing is also the only test
> machine that I have that runs systemd and it didn't have this
> problem on 3.19.
> 

Sigh, thanks Dave. Yes, you are correct: my patch enabled the
udev events as part of fixing ramdisks with partitions.
This is because if you do not enable them, then mount by UUID,
lsblk, and friends do not work.

I did try to test this in all kinds of ways, xfstests+ext4
as well, and ran with it on Fedora 20 for a while; sorry
about that.

It looks like the system expects that a ramdisk "should
not have these events".

I will send a patch ASAP that reinstates the module parameter
for enabling notifications, leaving the default off. It should
be easy to set the param if one intends to use these utilities.
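Assuming the parameter lands under the name used later in this thread ("part_show"; hypothetical until the patch is actually merged), enabling the events explicitly would look something like:

```shell
# Load brd with partition-scan/udev notification enabled; the parameter
# name follows the "part_show" patch discussed in this thread and may
# differ in the final version:
modprobe brd part_show=1
```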

That said, do you agree with me that there is brokenness in
systemd?

BTW: you also said something about the 4k sectors thing. It looks
like we are pulled in two different directions here. If you want
to use DAX on a ramdisk then you want it on; if you are not
using DAX and want to use smaller-than-page-size FS blocks, then
you do not want it.

Please advise on what we should do. Maybe only use 4k if BLK_DEV_RAM_DAX
is set in Kconfig?


Sorry for the mess, I'll send a fix ASAP

> Cheers,
> Dave.
> 

Thanks
Boaz

Brian Foster Feb. 26, 2015, 5:23 p.m. UTC | #6
On Thu, Feb 26, 2015 at 09:46:27AM +0200, Boaz Harrosh wrote:
> On 02/26/2015 01:43 AM, Dave Chinner wrote:
> > On Wed, Feb 25, 2015 at 06:31:15PM -0500, Brian Foster wrote:
> >> On Thu, Feb 26, 2015 at 09:32:48AM +1100, Dave Chinner wrote:
> >>> [cc linux-fsdevel, Boaz and others]
> >>>
> >>> On Wed, Feb 25, 2015 at 11:11:51AM -0500, Brian Foster wrote:
> >>>> On Wed, Feb 25, 2015 at 09:54:36AM +1100, Dave Chinner wrote:
> >>>>> From: Dave Chinner <dchinner@redhat.com>
> >>>>>
> >>>>> xfs/104, xfs/119, xfs/291 and xfs/297 have small fixed log sizes. A
> >>>>> recent change to the kernel ramdisk changed its physical sector
> >>>>> size from 512B to 4kB, and this results in mkfs calculating a log
> >>>>> size larger than the fixed test size and hence the tests fail.
> >>>>>
> >>>>> Change the log size to a larger size that works with 4k sectors, and
> >>>>> also increase the size of the filesystem being created so that the
> >>>>> amount of data space in the filesystem does not change and hence
> >>>>> does not perturb the rest of the test.
> >>>>>
> >>>>> Signed-off-by: Dave Chinner <dchinner@redhat.com>
> >>>>> ---
> >>>>
> >>>> Well for some reason I can't mount a ramdisk on the current tot to test
> >>>> this. In fact, I can't mount _anything_ after the ramdisk mount attempt.
> >>>> The mount actually reports success too, but there's nothing there... :/
> >>>>
> >>>> # modprobe brd
> >>>> # mkfs.xfs -f /dev/ram0 
> >>>> meta-data=/dev/ram0              isize=256    agcount=1, agsize=4096 blks
> >>>>          =                       sectsz=4096  attr=2, projid32bit=1
> >>>>          =                       crc=0        finobt=0
> >>>> data     =                       bsize=4096   blocks=4096, imaxpct=25
> >>>>          =                       sunit=0      swidth=0 blks
> >>>> naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
> >>>> log      =internal log           bsize=4096   blocks=1605, version=2
> >>>>          =                       sectsz=4096  sunit=1 blks, lazy-count=1
> >>>> realtime =none                   extsz=4096   blocks=0, rtextents=0
> >>>> # mount /dev/ram0 /mnt/
> >>>> # mount | grep mnt
> >>>> # umount  /mnt/
> >>>> umount: /mnt/: not mounted
> >>>>
> >>>> ... and then I can't even mount my normal scratch device until after a
> >>>> reboot:
> >>>>
> >>>> # mount /dev/test/scratch /mnt/
> >>>> # mount | grep mnt
> >>>> # umount  /mnt/
> >>>> umount: /mnt/: not mounted
> >>>
> >>> Ok, so that's just plain broken. What's in dmesg?
> >>>
> >>
> >> Once I got back to this I found that for some reason systemd is
> >> immediately invoking a umount on the mount. :/ No idea why or how to
> >> stop it, but if I do something like this:
> >>
> >> mount /dev/ram0 /mnt; cd /mnt
> >>
> >> ... I can occasionally win the race and get systemd to spin in a
> >> umount() cycle trying to undo the mount. I haven't gone back to confirm
> >> it's the same behavior with the normal devices at that point, but I
> >> suspect it is, perhaps due to getting into some kind of bad state.
> >>
> >> So fyi that this particular problem doesn't appear to be directly kernel
> >> related...
> > 
> > It may still be related to the kernel changes, e.g. by triggering
> > udev events when they didn't previously. The only machine I have
> > that is triggering the partition probing is also the only test
> > machine that I have that runs systemd and it didn't have this
> > problem on 3.19.
> > 
> 
> Sigh, thanks Dave. Yes, you are correct: my patch enabled the
> udev events as part of fixing ramdisks with partitions.
> This is because if you do not enable them, then mount by UUID,
> lsblk, and friends do not work.
> 
> I did try to test this in all kinds of ways, xfstests+ext4
> as well, and ran with it on Fedora 20 for a while; sorry
> about that.
> 
> It looks like the system expects that a ramdisk "should
> not have these events".
> 
> I will send a patch ASAP that reinstates the module parameter
> for enabling notifications, leaving the default off. It should
> be easy to set the param if one intends to use these utilities.
> 

Thanks Boaz, but I still see the same behavior with the part_show patch.
It seems to be something that broke in systemd on Fedora between
versions systemd-218 and systemd-219. The latter is broken on a 3.19
kernel as well.

I've filed a systemd bug so we'll see what comes of it from that end:

https://bugzilla.redhat.com/show_bug.cgi?id=1196452

Brian

> That said, please do agree with me that there is brokenness in
> systemd?
> 
> BTW: You also said something about the 4k sectors thing. It looks
> like we are pulled in two different directions here. If you want
> to use DAX on ramdisk then you want it on; if you are not using
> DAX and want to use smaller-than-page_size FS blocks, then you
> do not want it.
> 
> Please advise on what we should do. Maybe only do 4k if BLK_DEV_RAM_DAX
> is set in Kconfig?
> 
> 
> Sorry for the mess, I'll send a fix ASAP
> 
> > Cheers,
> > Dave.
> > 
> 
> Thanks
> Boaz
> 
--
To unsubscribe from this list: send the line "unsubscribe fstests" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Dave Chinner Feb. 27, 2015, 12:58 a.m. UTC | #7
On Thu, Feb 26, 2015 at 09:46:27AM +0200, Boaz Harrosh wrote:
> On 02/26/2015 01:43 AM, Dave Chinner wrote:
> > On Wed, Feb 25, 2015 at 06:31:15PM -0500, Brian Foster wrote:
> >> On Thu, Feb 26, 2015 at 09:32:48AM +1100, Dave Chinner wrote:
> >>> [cc linux-fsdevel, Boaz and others]
> >>>
> >>> On Wed, Feb 25, 2015 at 11:11:51AM -0500, Brian Foster wrote:
> >>>> On Wed, Feb 25, 2015 at 09:54:36AM +1100, Dave Chinner wrote:
> >>>>> From: Dave Chinner <dchinner@redhat.com>
> >>>>>
> >>>>> xfs/104, xfs/119, xfs/291 and xfs/297 have small fixed log sizes. A
> >>>>> recent change to the kernel ramdisk changed it's physical sector
> >>>>> size from 512B to 4kB, and this results in mkfs calculating a log
> >>>>> size larger than the fixed test size and hence the tests fail.
> >>>>>
> >>>>> Change the log size to a larger size that works with 4k sectors, and
> >>>>> also increase the size of the filesystem being created so that the
> >>>>> amount of data space in the filesystem does not change and hence
> >>>>> does not perturb the rest of the test.
> >>>>>
> >>>>> Signed-off-by: Dave Chinner <dchinner@redhat.com>
> >>>>> ---
> >>>>
> >>>> Well for some reason I can't mount a ramdisk on the current tot to test
> >>>> this. In fact, I can't mount _anything_ after the ramdisk mount attempt.
> >>>> The mount actually reports success too, but there's nothing there... :/
> >>>>
> >>>> # modprobe brd
> >>>> # mkfs.xfs -f /dev/ram0 
> >>>> meta-data=/dev/ram0              isize=256    agcount=1, agsize=4096
> >>>> blks
> >>>>          =                       sectsz=4096  attr=2, projid32bit=1
> >>>>          =                       crc=0        finobt=0
> >>>> data     =                       bsize=4096   blocks=4096, imaxpct=25
> >>>>          =                       sunit=0      swidth=0 blks
> >>>> naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
> >>>> log      =internal log           bsize=4096   blocks=1605, version=2
> >>>>          =                       sectsz=4096  sunit=1 blks, lazy-count=1
> >>>> realtime =none                   extsz=4096   blocks=0, rtextents=0
> >>>> # mount /dev/ram0 /mnt/
> >>>> # mount | grep mnt
> >>>> # umount  /mnt/
> >>>> umount: /mnt/: not mounted
> >>>>
> >>>> ... and then I can't even mount my normal scratch device until after a
> >>>> reboot:
> >>>>
> >>>> # mount /dev/test/scratch /mnt/
> >>>> # mount | grep mnt
> >>>> # umount  /mnt/
> >>>> umount: /mnt/: not mounted
> >>>
> >>> Ok, so that's just plain broken. What's in dmesg?
> >>>
> >>
> >> Once I got back to this I found that for some reason systemd is
> >> immediately invoking a umount on the mount. :/ No idea why or how to
> >> stop it, but if I do something like this:
> >>
> >> mount /dev/ram0 /mnt; cd /mnt
> >>
> >> ... I can occasionally win the race and get systemd to spin in a
> >> umount() cycle trying to undo the mount. I haven't gone back to confirm
> >> it's the same behavior with the normal devices at that point, but I
> >> suspect it is, perhaps due to getting into some kind of bad state.
> >>
> >> So fyi that this particular problem doesn't appear to be directly kernel
> >> related...
> > 
> > It may still be related to the kernel changes  e.g. by triggering
> > udev events when they didn't previously. The only machine I have
> > that is triggering the partition probing is also the only test
> > machine that I have that runs systemd and it didn't have this
> > problem on 3.19.
> > 
> 
> Sigh, thanks Dave. Yes you are correct my patch enabled the
> udev events, as part of fixing ramdisk with partitions.
> This is because if you do not enable them then mount by UUID
> and all sort of lsblk and friends do not work.

Sure, that's what the gendisk abstraction gives you. But why am I
seeing random partition probes on a ramdisk that *isn't using
partitions*?
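
A quick sanity check here (a sketch; the device path is an assumption and the script skips cleanly where /dev/ram0 does not exist) is to confirm the ramdisk really carries no partition table, so any partition-probe uevents on it are spurious:

```shell
#!/bin/sh
# Confirm the ramdisk has no partitions in the kernel's view.
# DEV is an assumption; override with DEV=/dev/ramN if needed.
DEV=${DEV:-/dev/ram0}
if [ -b "$DEV" ]; then
    # Count partition entries like ram0p1 in /proc/partitions.
    PARTS=$(grep -c "${DEV#/dev/}p" /proc/partitions)
    echo "partitions on $DEV: $PARTS"
else
    PARTS=0
    echo "SKIP: $DEV is not a block device here"
fi
```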

> I did try to test this in all kind of ways, xfstest+ext4
> as well, and ran with it on Fedora 20 for a while, sorry
> about that.
> 
> It looks like the system anticipates that ramdisk "should
> not have these events"

Right, but not because it's a ramdisk. Those events should not be
occurring because I'm not creating or destroying devices, I'm not
changing partition tables, I'm not resizing ramdisks or partitions,
and so on. I'm simply mkfs'ing, mounting and unmounting filesystems
on the ramdisks - nothing should be generating device based udev
events...

Finding the trigger that is causing these events will tell us what
the bug is - restricting the config won't help, especially as DAX
will *always* be enabled on my test machines as it's something
needed in my test matrix. I'm not sure how to go about finding that
trigger right now and as such I won't really have time to look at it
until after lsfmm/vault...
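
One way to hunt for that trigger (a sketch; the device, mount point and log path are assumptions, and it needs root and udevadm, so it skips cleanly elsewhere) is to log udev events across a plain mkfs/mount/umount cycle — anything that shows up for the device was generated by that cycle alone:

```shell
#!/bin/sh
# Capture kernel- and udev-level events while exercising the ramdisk.
# DEV/MNT/LOG are assumptions; needs root and udevadm.
DEV=${DEV:-/dev/ram0}
MNT=${MNT:-/mnt}
LOG=${LOG:-/tmp/udev-events.log}
if [ -b "$DEV" ] && [ "$(id -u)" = 0 ]; then
    udevadm monitor --kernel --udev --property > "$LOG" 2>&1 &
    monpid=$!
    mkfs.xfs -f "$DEV" > /dev/null
    mount "$DEV" "$MNT" && umount "$MNT"
    sleep 1
    kill "$monpid"
    # Any event listed here fired without partition/table changes.
    grep "${DEV#/dev/}" "$LOG" || echo "no events logged for $DEV"
    RAN=1
else
    echo "SKIP: need root and $DEV"
    RAN=0
fi
```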

Cheers,

Dave.
Boaz Harrosh March 1, 2015, 8:27 a.m. UTC | #8
On 02/27/2015 02:58 AM, Dave Chinner wrote:
<>
>>
>> Sigh, thanks Dave. Yes you are correct my patch enabled the
>> udev events, as part of fixing ramdisk with partitions.
>> This is because if you do not enable them then mount by UUID
>> and all sort of lsblk and friends do not work.
> 
> Sure, that's what the gendisk abstraction gives you. But why am I
> seeing random partition probes on a ramdisk that *isn't using
> partitions*?
> 

Yes, there should be one new event on create (modprobe or
mknod) which was not there before. Perhaps it triggers a systemd
process that never used to run before (and is now sitting there
making a mess).

<>
>> It looks like the system anticipates that ramdisk "should
>> not have these events"
> 
> Right, but not because it's a ramdisk. Those events should not be
> occurring because I'm not creating or destroying devices, I'm not
> changing partition tables, I'm not resizing ramdisks or partitions,
> and so on. I'm simply mkfs'ing, mounting and unmounting filesystems
> on the ramdisks - nothing should be generating device based udev
> events...
> 
> Finding the trigger that is causing these events will tell us what
> the bug is - 

> restricting the config won't help, especially as DAX
> will *always* be enabled on my test machines as it's something
> needed in my test matrix.

No, the "if DAX" is for the 4k thing. The enablement of the uevents is
with a new "part_show" module parameter (See patch-1).

> I'm not sure how to go about finding that
> trigger right now and as such I won't really have time to look at it
> until after lsfmm/vault...
> 

I'll try to reproduce this here. What Fedora version do I need?

> Cheers,
> Dave.
> 

Thanks
Boaz

Boaz Harrosh March 1, 2015, 8:49 a.m. UTC | #9
On 02/26/2015 07:23 PM, Brian Foster wrote:
<>
> 
> Thanks Boaz, but I still see the same behavior with the part_show patch.
> It seems to be something that broke in systemd on Fedora between
> versions systemd-218 and systemd-219. The latter is broken on a 3.19
> kernel as well.
> 
> I've filed a systemd bug so we'll see what comes of it from that end:
> 
> https://bugzilla.redhat.com/show_bug.cgi?id=1196452
> 
> Brian
> 

Hi Brian

It says in bugzilla (link above) that this issue is "fixed in git", so
I guess we should be fine?

Jens does *not* need to take
	[PATCH] brd: Re-instate ram disk visibility option (part_show)

Please confirm.

Please tell me if there is anything I can help with?

Thanks
Boaz

Brian Foster March 1, 2015, 2:28 p.m. UTC | #10
On Sun, Mar 01, 2015 at 10:49:16AM +0200, Boaz Harrosh wrote:
> On 02/26/2015 07:23 PM, Brian Foster wrote:
> <>
> > 
> > Thanks Boaz, but I still see the same behavior with the part_show patch.
> > It seems to be something that broke in systemd on Fedora between
> > versions systemd-218 and systemd-219. The latter is broken on a 3.19
> > kernel as well.
> > 
> > I've filed a systemd bug so we'll see what comes of it from that end:
> > 
> > https://bugzilla.redhat.com/show_bug.cgi?id=1196452
> > 
> > Brian
> > 
> 
> Hi Brian
> 
> It says in bugzilla (link above) that this issue is "fixed in git" so
> I guess we should be fine ?
> 

Yes, I picked up a more recent systemd version and it seems to work fine
now without the patch referenced below.

Brian

> Jens does *not* need to take
> 	[PATCH] brd: Re-instate ram disk visibility option (part_show)
> 
> Please confirm.
> 
> Please tell me if there is anything I can help with?
> 
> Thanks
> Boaz
> 
Dave Chinner March 2, 2015, 1:09 a.m. UTC | #11
On Sun, Mar 01, 2015 at 10:27:49AM +0200, Boaz Harrosh wrote:
> On 02/27/2015 02:58 AM, Dave Chinner wrote:
> >> It looks like the system anticipates that ramdisk "should
> >> not have these events"
> > 
> > Right, but not because it's a ramdisk. Those events should not be
> > occurring because I'm not creating or destroying devices, I'm not
> > changing partition tables, I'm not resizing ramdisks or partitions,
> > and so on. I'm simply mkfs'ing, mounting and unmounting filesystems
> > on the ramdisks - nothing should be generating device based udev
> > events...
> > 
> > Finding the trigger that is causing these events will tell us what
> > the bug is - 
> 
> > restricting the config won't help, especially as DAX
> > will *always* be enabled on my test machines as it's something
> > needed in my test matrix.
> 
> No the "if DAX" is for the 4k thing. The enablement of the uevents is
> with a new "part_show" module parameter (See patch-1).

Sure, but that doesn't answer my question: what is generating device
level uevents when all I'm doing is mkfs/mount/umount on the device?

> > I'm not sure how to go about finding that
> > trigger right now and as such I won't really have time to look at it
> > until after lsfmm/vault...
> > 
> 
> I'll try to reproduce this here. What Fedora version do I need?

I'm running debian unstable w/ systemd-215 on the particular test
machine that is hitting this problem.

Cheers,

Dave.
Boaz Harrosh March 2, 2015, 9:40 a.m. UTC | #12
On 03/02/2015 03:09 AM, Dave Chinner wrote:
> On Sun, Mar 01, 2015 at 10:27:49AM +0200, Boaz Harrosh wrote:
>> On 02/27/2015 02:58 AM, Dave Chinner wrote:
<>
>> No the "if DAX" is for the 4k thing. The enablement of the uevents is
>> with a new "part_show" module parameter (See patch-1).
> 
> Sure, but that doesn't answer my question: what is generating device
> level uevents when all I'm doing is mkfs/mount/umount on the device?
> 

I was suspecting it is this systemd bug, which keeps trying to tear
down the devices.

>>> I'm not sure how to go about finding that
>>> trigger right now and as such I won't really have time to look at it
>>> until after lsfmm/vault...
>>>
>>
>> I'll try to reproduce this here. What Fedora version do I need?
> 
> I'm running debian unstable w/ systemd-215 on the particular test
> machine that is hitting this problem.
> 

Oooff, on my Fedora 20 I'm at systemd 208. I'll see if I have time to
install an fc21 VM or maybe upgrade from source. (Any easy way?)

I have set up my xfs rig and by now have run "./check -g auto". I tried
both part_show=1 and part_show=0, and both look to be working as expected.
(Do I need any special $MKFS_OPTIONS or anything else?)
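
For reference, the corresponding xfstests configuration for a ramdisk rig is roughly this (a sketch; the device names and mount points are assumptions, the variables are the standard xfstests ones):

```shell
# Sketch of an xfstests local.config for a ramdisk-backed run.
export TEST_DEV=/dev/ram0
export TEST_DIR=/mnt/test
export SCRATCH_DEV=/dev/ram1
export SCRATCH_MNT=/mnt/scratch
export FSTYP=xfs
export MKFS_OPTIONS=""      # defaults; no special options needed
```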

I'll probably be giving up soon, and will just wait for more reports.

With the patch-1 I sent I am reverting to the old behavior, so I need
a reproducer to try: running with patch-1, part_show=1 should show
the problem and part_show=0 should not. Otherwise this is something else.
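
That reproduction matrix can be sketched as a script (an assumption-laden sketch: it presumes the patched brd module exposes the part_show parameter named above, and needs root, so it skips cleanly elsewhere):

```shell
#!/bin/sh
# Mount the ramdisk with uevent notification on and off, and see
# whether systemd tears the mount down behind our back.
# Assumes a brd module with the part_show parameter; needs root.
if [ "$(id -u)" = 0 ] && modinfo brd 2>/dev/null | grep -q part_show; then
    for show in 1 0; do
        modprobe -r brd 2>/dev/null
        modprobe brd part_show=$show
        mkfs.xfs -f /dev/ram0 > /dev/null
        mount /dev/ram0 /mnt
        sleep 2                 # give systemd time to "clean up"
        if mountpoint -q /mnt; then
            echo "part_show=$show: mount survived"
            umount /mnt
        else
            echo "part_show=$show: mount was torn down"
        fi
    done
    RESULT=ran
else
    echo "SKIP: need root and a brd module with part_show"
    RESULT=skipped
fi
```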

> Cheers,
> Dave.
> 

Thanks Dave, sorry for trapping you in this boring mess; such is life
in the kernel, the one reproducing the problem needs to help fix it ;-)

Have an enjoyable and productive LSF
Thanks
Boaz

Patch

diff --git a/tests/xfs/104 b/tests/xfs/104
index 69fcc69..ca2ae21 100755
--- a/tests/xfs/104
+++ b/tests/xfs/104
@@ -81,10 +81,10 @@  modsize=`expr   4 \* $incsize`	# pause after this many increments
 [ `expr $endsize / $dbsize` -lt $dblocks ] || _notrun "Scratch device too small"
 
 nags=4
-size=`expr 120 \* 1048576`	# 120 megabytes initially
+size=`expr 125 \* 1048576`	# 125 megabytes initially
 sizeb=`expr $size / $dbsize`	# in data blocks
 echo "*** creating scratch filesystem"
-_create_scratch -lsize=5m -dsize=${size} -dagcount=${nags}
+_create_scratch -lsize=10m -dsize=${size} -dagcount=${nags}
 
 fillsize=`expr 110 \* 1048576`	# 110 megabytes of filling
 echo "*** using some initial space on scratch filesystem"
@@ -95,13 +95,13 @@  _fill_scratch $fillsize
 # Kick off more stress threads on each iteration, grow; repeat.
 #
 while [ $size -le $endsize ]; do
-	echo "*** stressing a ${size} byte filesystem"
+	echo "*** stressing filesystem"
 	echo "*** stressing a ${sizeb} block filesystem" >> $seqres.full
 	_stress_scratch
 	sleep 1
 	size=`expr $size + $incsize`
 	sizeb=`expr $size / $dbsize`	# in data blocks
-	echo "*** growing to a ${size} byte filesystem"
+	echo "*** growing filesystem"
 	echo "*** growing to a ${sizeb} block filesystem" >> $seqres.full
 	xfs_growfs -D ${sizeb} $SCRATCH_MNT \
 		| tee -a $seqres.full | _filter_mkfs 2>$tmp.growfs
diff --git a/tests/xfs/104.out b/tests/xfs/104.out
index f237e5e..de6c7f2 100644
--- a/tests/xfs/104.out
+++ b/tests/xfs/104.out
@@ -15,8 +15,8 @@  log      =LDEV bsize=XXX blocks=XXX
 realtime =RDEV extsz=XXX blocks=XXX, rtextents=XXX
 *** mount
 *** using some initial space on scratch filesystem
-*** stressing a 125829120 byte filesystem
-*** growing to a 169869312 byte filesystem
+*** stressing filesystem
+*** growing filesystem
 meta-data=DDEV isize=XXX agcount=N, agsize=XXX blks
 data     = bsize=XXX blocks=XXX, imaxpct=PCT
          = sunit=XXX swidth=XXX, unwritten=X
@@ -25,8 +25,8 @@  log      =LDEV bsize=XXX blocks=XXX
 realtime =RDEV extsz=XXX blocks=XXX, rtextents=XXX
 AGCOUNT=4
 
-*** stressing a 169869312 byte filesystem
-*** growing to a 213909504 byte filesystem
+*** stressing filesystem
+*** growing filesystem
 meta-data=DDEV isize=XXX agcount=N, agsize=XXX blks
 data     = bsize=XXX blocks=XXX, imaxpct=PCT
          = sunit=XXX swidth=XXX, unwritten=X
@@ -35,8 +35,8 @@  log      =LDEV bsize=XXX blocks=XXX
 realtime =RDEV extsz=XXX blocks=XXX, rtextents=XXX
 AGCOUNT=6
 
-*** stressing a 213909504 byte filesystem
-*** growing to a 257949696 byte filesystem
+*** stressing filesystem
+*** growing filesystem
 meta-data=DDEV isize=XXX agcount=N, agsize=XXX blks
 data     = bsize=XXX blocks=XXX, imaxpct=PCT
          = sunit=XXX swidth=XXX, unwritten=X
@@ -45,8 +45,8 @@  log      =LDEV bsize=XXX blocks=XXX
 realtime =RDEV extsz=XXX blocks=XXX, rtextents=XXX
 AGCOUNT=7
 
-*** stressing a 257949696 byte filesystem
-*** growing to a 301989888 byte filesystem
+*** stressing filesystem
+*** growing filesystem
 meta-data=DDEV isize=XXX agcount=N, agsize=XXX blks
 data     = bsize=XXX blocks=XXX, imaxpct=PCT
          = sunit=XXX swidth=XXX, unwritten=X
@@ -55,8 +55,8 @@  log      =LDEV bsize=XXX blocks=XXX
 realtime =RDEV extsz=XXX blocks=XXX, rtextents=XXX
 AGCOUNT=9
 
-*** stressing a 301989888 byte filesystem
-*** growing to a 346030080 byte filesystem
+*** stressing filesystem
+*** growing filesystem
 meta-data=DDEV isize=XXX agcount=N, agsize=XXX blks
 data     = bsize=XXX blocks=XXX, imaxpct=PCT
          = sunit=XXX swidth=XXX, unwritten=X
@@ -65,8 +65,8 @@  log      =LDEV bsize=XXX blocks=XXX
 realtime =RDEV extsz=XXX blocks=XXX, rtextents=XXX
 AGCOUNT=10
 
-*** stressing a 346030080 byte filesystem
-*** growing to a 390070272 byte filesystem
+*** stressing filesystem
+*** growing filesystem
 meta-data=DDEV isize=XXX agcount=N, agsize=XXX blks
 data     = bsize=XXX blocks=XXX, imaxpct=PCT
          = sunit=XXX swidth=XXX, unwritten=X
@@ -75,8 +75,8 @@  log      =LDEV bsize=XXX blocks=XXX
 realtime =RDEV extsz=XXX blocks=XXX, rtextents=XXX
 AGCOUNT=11
 
-*** stressing a 390070272 byte filesystem
-*** growing to a 434110464 byte filesystem
+*** stressing filesystem
+*** growing filesystem
 meta-data=DDEV isize=XXX agcount=N, agsize=XXX blks
 data     = bsize=XXX blocks=XXX, imaxpct=PCT
          = sunit=XXX swidth=XXX, unwritten=X
@@ -85,8 +85,8 @@  log      =LDEV bsize=XXX blocks=XXX
 realtime =RDEV extsz=XXX blocks=XXX, rtextents=XXX
 AGCOUNT=13
 
-*** stressing a 434110464 byte filesystem
-*** growing to a 478150656 byte filesystem
+*** stressing filesystem
+*** growing filesystem
 meta-data=DDEV isize=XXX agcount=N, agsize=XXX blks
 data     = bsize=XXX blocks=XXX, imaxpct=PCT
          = sunit=XXX swidth=XXX, unwritten=X
@@ -95,18 +95,18 @@  log      =LDEV bsize=XXX blocks=XXX
 realtime =RDEV extsz=XXX blocks=XXX, rtextents=XXX
 AGCOUNT=14
 
-*** stressing a 478150656 byte filesystem
-*** growing to a 522190848 byte filesystem
+*** stressing filesystem
+*** growing filesystem
 meta-data=DDEV isize=XXX agcount=N, agsize=XXX blks
 data     = bsize=XXX blocks=XXX, imaxpct=PCT
          = sunit=XXX swidth=XXX, unwritten=X
 naming   =VERN bsize=XXX
 log      =LDEV bsize=XXX blocks=XXX
 realtime =RDEV extsz=XXX blocks=XXX, rtextents=XXX
-AGCOUNT=16
+AGCOUNT=15
 
-*** stressing a 522190848 byte filesystem
-*** growing to a 566231040 byte filesystem
+*** stressing filesystem
+*** growing filesystem
 meta-data=DDEV isize=XXX agcount=N, agsize=XXX blks
 data     = bsize=XXX blocks=XXX, imaxpct=PCT
          = sunit=XXX swidth=XXX, unwritten=X
@@ -115,8 +115,8 @@  log      =LDEV bsize=XXX blocks=XXX
 realtime =RDEV extsz=XXX blocks=XXX, rtextents=XXX
 AGCOUNT=17
 
-*** stressing a 566231040 byte filesystem
-*** growing to a 610271232 byte filesystem
+*** stressing filesystem
+*** growing filesystem
 meta-data=DDEV isize=XXX agcount=N, agsize=XXX blks
 data     = bsize=XXX blocks=XXX, imaxpct=PCT
          = sunit=XXX swidth=XXX, unwritten=X
diff --git a/tests/xfs/119 b/tests/xfs/119
index c7c46d9..490495b 100755
--- a/tests/xfs/119
+++ b/tests/xfs/119
@@ -54,7 +54,7 @@  _require_scratch
 # this may hang
 sync
 
-export MKFS_OPTIONS="-l version=2,size=1200b,su=64k" 
+export MKFS_OPTIONS="-l version=2,size=2500b,su=64k" 
 export MOUNT_OPTIONS="-o logbsize=64k"
 _scratch_mkfs_xfs >/dev/null
 
diff --git a/tests/xfs/291 b/tests/xfs/291
index fbf9c51..c226e65 100755
--- a/tests/xfs/291
+++ b/tests/xfs/291
@@ -46,7 +46,7 @@  _supported_os IRIX Linux
 # real QA test starts here
 rm -f $seqres.full
 _require_scratch
-_scratch_mkfs_xfs -n size=16k -l size=5m -d size=128m >> $seqres.full 2>&1
+_scratch_mkfs_xfs -n size=16k -l size=10m -d size=133m >> $seqres.full 2>&1
 _scratch_mount
 
 # First we cause very badly fragmented freespace, then
diff --git a/tests/xfs/297 b/tests/xfs/297
index 1cdbbb9..25b597e 100755
--- a/tests/xfs/297
+++ b/tests/xfs/297
@@ -50,7 +50,7 @@  _require_scratch
 _require_freeze
 
 rm -f $seqres.full
-_scratch_mkfs_xfs -d agcount=16,su=256k,sw=12 -l su=256k,size=2560b >/dev/null 2>&1
+_scratch_mkfs_xfs -d agcount=16,su=256k,sw=12 -l su=256k,size=5120b >/dev/null 2>&1
 _scratch_mount >/dev/null 2>&1
 
 STRESS_DIR="$SCRATCH_MNT/testdir"