
[blktests] common/xfs: ignore the 32M log size during mkfs.xfs

Message ID 20221019051244.810755-1-yi.zhang@redhat.com (mailing list archive)
State New, archived
Series [blktests] common/xfs: ignore the 32M log size during mkfs.xfs

Commit Message

Yi Zhang Oct. 19, 2022, 5:12 a.m. UTC
The new minimum size for the xfs log is 64MB, introduced in xfsprogs
v5.19.0. Let's drop the 32M log size option, or nvme/013 will fail at:

$ mkfs.xfs -l size=32m -f /dev/nvme0n1
Log size must be at least 64MB.
Usage: mkfs.xfs
/* blocksize */		[-b size=num]
/* config file */	[-c options=xxx]
/* metadata */		[-m crc=0|1,finobt=0|1,uuid=xxx,rmapbt=0|1,reflink=0|1,
			    inobtcount=0|1,bigtime=0|1]
/* data subvol */	[-d agcount=n,agsize=n,file,name=xxx,size=num,
			    (sunit=value,swidth=value|su=num,sw=num|noalign),
			    sectsize=num
/* force overwrite */	[-f]
/* inode size */	[-i perblock=n|size=num,maxpct=n,attr=0|1|2,
			    projid32bit=0|1,sparse=0|1,nrext64=0|1]
/* no discard */	[-K]
/* log subvol */	[-l agnum=n,internal,size=num,logdev=xxx,version=n
			    sunit=value|su=num,sectsize=num,lazy-count=0|1]
/* label */		[-L label (maximum 12 characters)]
/* naming */		[-n size=num,version=2|ci,ftype=0|1]
/* no-op info only */	[-N]
/* prototype file */	[-p fname]
/* quiet */		[-q]
/* realtime subvol */	[-r extsize=num,size=num,rtdev=xxx]
/* sectorsize */	[-s size=num]
/* version */		[-V]
			devicename
<devicename> is required unless -d name=xxx is given.
<num> is xxx (bytes), xxxs (sectors), xxxb (fs blocks), xxxk (xxx KiB),
      xxxm (xxx MiB), xxxg (xxx GiB), xxxt (xxx TiB) or xxxp (xxx PiB).
<value> is xxx (512 byte blocks).

Signed-off-by: Yi Zhang <yi.zhang@redhat.com>
---
 common/xfs | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
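
For context, one way to keep an explicit log size without tripping over the
new minimum would be to derive it from the installed xfsprogs version. A
minimal bash sketch, not what this patch does; the parsing of `mkfs.xfs -V`
output is an assumption:

# Sketch only: pick a log size compatible with the installed xfsprogs,
# assuming `mkfs.xfs -V` prints something like "mkfs.xfs version 5.19.0".
_xfs_min_log_size() {
	local major minor
	read -r major minor < <(mkfs.xfs -V | grep -Eo '[0-9]+\.[0-9]+' |
				head -1 | tr '.' ' ')
	if (( major > 5 || (major == 5 && minor >= 19) )); then
		echo 64m	# xfsprogs >= 5.19 enforces a 64MB minimum log
	else
		echo 32m
	fi
}
# usage: mkfs.xfs -l size="$(_xfs_min_log_size)" -f "${bdev}"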

Comments

Chaitanya Kulkarni Oct. 19, 2022, 6:06 a.m. UTC | #1
On 10/18/22 22:12, Yi Zhang wrote:
> The new minimum size for the xfs log is 64MB, introduced in xfsprogs
> v5.19.0. Let's drop the 32M log size option, or nvme/013 will fail at:

instead of removing it, set it to 64MB?
Chaitanya Kulkarni Oct. 19, 2022, 6:16 a.m. UTC | #2
On 10/18/22 22:12, Yi Zhang wrote:
> The new minimum size for the xfs log is 64MB, introduced in xfsprogs
> v5.19.0. Let's drop the 32M log size option, or nvme/013 will fail at:
> 

instead of removing it, set it to 64MB?

-ck
Eric Sandeen Oct. 19, 2022, 2:19 p.m. UTC | #3
On 10/19/22 1:16 AM, Chaitanya Kulkarni wrote:
> On 10/18/22 22:12, Yi Zhang wrote:
>> The new minimum size for the xfs log is 64MB, introduced in xfsprogs
>> v5.19.0. Let's drop the 32M log size option, or nvme/013 will fail at:
>>
> 
> instead of removing it, set it to 64MB?

What is the advantage of hard-coding any log size? By doing so you are
overriding mkfs's own best-practice heuristics, and you might run into
other failures in the future.

Is there a reason to not just use the defaults?

Thanks,
-Eric
Chaitanya Kulkarni Oct. 19, 2022, 7:18 p.m. UTC | #4
On 10/19/22 07:19, Eric Sandeen wrote:
> On 10/19/22 1:16 AM, Chaitanya Kulkarni wrote:
>> On 10/18/22 22:12, Yi Zhang wrote:
>>> The new minimum size for the xfs log is 64MB, introduced in xfsprogs
>>> v5.19.0. Let's drop the 32M log size option, or nvme/013 will fail at:
>>>
>>
>> instead of removing it, set it to 64MB?
> 
> What is the advantage of hard-coding any log size? By doing so you are
> overriding mkfs's own best-practice heuristics, and you might run into
> other failures in the future.
> 
> Is there a reason to not just use the defaults?
> 

I think the point here is to use a minimal XFS setup.

Is the default size minimal? Or at least we should document
what size it is.

-ck
Shinichiro Kawasaki Oct. 21, 2022, 8:58 a.m. UTC | #5
On Oct 19, 2022 / 19:18, Chaitanya Kulkarni wrote:
> On 10/19/22 07:19, Eric Sandeen wrote:
> > On 10/19/22 1:16 AM, Chaitanya Kulkarni wrote:
> >> On 10/18/22 22:12, Yi Zhang wrote:
> >>> The new minimum size for the xfs log is 64MB, introduced in xfsprogs
> >>> v5.19.0. Let's drop the 32M log size option, or nvme/013 will fail at:
> >>>
> >>
> >> instead of removing it, set it to 64MB?
> > 
> > What is the advantage of hard-coding any log size? By doing so you are
> > overriding mkfs's own best-practice heuristics, and you might run into
> > other failures in the future.
> > 
> > Is there a reason to not just use the defaults?
> > 
> 
> I think the point here is to use a minimal XFS setup.
> 
> Is the default size minimal? Or at least we should document
> what size it is.

As far as I can tell from `man mkfs.xfs`, it is not minimal; it changes
depending on the filesystem size.

To have a minimal XFS setup, do we need to care about parameters other than log
size? I'm looking at 'man mkfs.xfs' and it mentions data section size and some
other size-related options.
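
(For reference, the log size mkfs would pick by default can be previewed
without writing anything, using the -N "no-op info only" flag from the usage
text above. The device path and sample output line below are illustrative:)

# Dry run: print the geometry mkfs.xfs would use, without formatting.
mkfs.xfs -N -f /dev/nvme0n1 | grep '^log'
# e.g.: log      =internal log           bsize=4096   blocks=16384, version=2
# blocks * bsize is the default log size the heuristics chose (here 64MiB).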

Two more questions have come up in my mind:

- Did we have nvme driver or block layer issues related to xfs log size in the
  past? If so, it is reasonable to specify it.

- When we see failures of the xfs user test cases (nvme/012,013,035), is the
  xfs log size useful for debugging?
Chaitanya Kulkarni Oct. 21, 2022, 9:42 p.m. UTC | #6
On 10/21/22 01:58, Shinichiro Kawasaki wrote:
> On Oct 19, 2022 / 19:18, Chaitanya Kulkarni wrote:
>> On 10/19/22 07:19, Eric Sandeen wrote:
>>> On 10/19/22 1:16 AM, Chaitanya Kulkarni wrote:
>>>> On 10/18/22 22:12, Yi Zhang wrote:
>>>>> The new minimum size for the xfs log is 64MB, introduced in xfsprogs
>>>>> v5.19.0. Let's drop the 32M log size option, or nvme/013 will fail at:
>>>>>
>>>>
>>>> instead of removing it, set it to 64MB?
>>>
>>> What is the advantage of hard-coding any log size? By doing so you are
>>> overriding mkfs's own best-practice heuristics, and you might run into
>>> other failures in the future.
>>>
>>> Is there a reason to not just use the defaults?
>>>
>>
>> I think the point here is to use a minimal XFS setup.
>>
>> Is the default size minimal? Or at least we should document
>> what size it is.
> 
> As far as I can tell from `man mkfs.xfs`, it is not minimal; it changes
> depending on the filesystem size.
> 
> To have a minimal XFS setup, do we need to care about parameters other than log
> size? I'm looking at 'man mkfs.xfs' and it mentions data section size and some
> other size-related options.
> 
> Two more questions have come up in my mind:
> 
> - Did we have nvme driver or block layer issues related to xfs log size in the
>    past? If so, it is reasonable to specify it.
> 

I think creating a minimal setup is part of the testcase, and we should
not change it unless there is an explicit reason for doing so.

> - When we see failures of the xfs user test cases (nvme/012,013,035), is the
>    xfs log size useful for debugging?
> 

I hope so ..

-ck
Shinichiro Kawasaki Oct. 21, 2022, 11:56 p.m. UTC | #7
On Oct 21, 2022 / 21:42, Chaitanya Kulkarni wrote:

...

> I think creating a minimal setup is part of the testcase, and we should
> not change it unless there is an explicit reason for doing so.

I see. I find no reason to change the "minimal log size policy", so let's go
with the 64MB log size to keep it.

Yi, would you mind reposting v2 with size=64m?
Yi Zhang Oct. 23, 2022, 3:27 p.m. UTC | #8
On Sat, Oct 22, 2022 at 7:57 AM Shinichiro Kawasaki
<shinichiro.kawasaki@wdc.com> wrote:
>
> On Oct 21, 2022 / 21:42, Chaitanya Kulkarni wrote:
>
> ...
>
> > I think creating a minimal setup is part of the testcase, and we should
> > not change it unless there is an explicit reason for doing so.
>
> I see. I find no reason to change the "minimal log size policy", so let's go
> with the 64MB log size to keep it.
>
> Yi, would you mind reposting v2 with size=64m?
Sure, and before I post it, I want to ask for suggestions about some
other code changes:

After setting the log size to 64M, I found that nvme/012 and nvme/013
fail[1]: there was not enough space for the fio test with size=950m.
Either [2] or [3] works; which one do you prefer, or do you have some
other suggestion? Thanks.

[1]
nvme/012 (run mkfs and data verification fio job on NVMeOF block
device-backed ns) [failed]
    runtime  54.400s  ...  53.514s
    --- tests/nvme/012.out 2022-10-21 10:21:12.461157830 -0400
    +++ /root/blktests/results/nodev/nvme/012.out.bad 2022-10-23
09:10:01.125990759 -0400
    @@ -1,5 +1,22 @@
     Running nvme/012
     91fdba0d-f87b-4c25-b80f-db7be1418b9e
     uuid.91fdba0d-f87b-4c25-b80f-db7be1418b9e
    +fio: io_u error on file /mnt/blktests//verify.0.0: No space left
on device: write offset=292646912, buflen=4096
    +fio: io_u error on file /mnt/blktests//verify.0.0: No space left
on device: write offset=336576512, buflen=4096
    +fio: io_u error on file /mnt/blktests//verify.0.0: No space left
on device: write offset=144633856, buflen=4096
    +fio: io_u error on file /mnt/blktests//verify.0.0: No space left
on device: write offset=149147648, buflen=4096
    ...
    (Run 'diff -u tests/nvme/012.out
/root/blktests/results/nodev/nvme/012.out.bad' to see the entire diff)
nvme/013 (run mkfs and data verification fio job on NVMeOF file-backed
ns) [failed]
    runtime  172.121s  ...  292.743s
    --- tests/nvme/013.out 2022-10-21 10:21:12.461157830 -0400
    +++ /root/blktests/results/nodev/nvme/013.out.bad 2022-10-23
09:14:43.493752127 -0400
    @@ -1,5 +1,17 @@
     Running nvme/013
     91fdba0d-f87b-4c25-b80f-db7be1418b9e
     uuid.91fdba0d-f87b-4c25-b80f-db7be1418b9e
    +fio: io_u error on file /mnt/blktests//verify.0.0: No space left
on device: write offset=665759744, buflen=4096
    +fio: io_u error on file /mnt/blktests//verify.0.0: No space left
on device: write offset=198172672, buflen=4096
    +fio: io_u error on file /mnt/blktests//verify.0.0: No space left
on device: write offset=369643520, buflen=4096
    +fio: io_u error on file /mnt/blktests//verify.0.0: No space left
on device: write offset=195624960, buflen=4096
    ...
    (Run 'diff -u tests/nvme/013.out
/root/blktests/results/nodev/nvme/013.out.bad' to see the entire diff)

[2]
diff --git a/tests/nvme/012 b/tests/nvme/012
index c9d2438..25bde25 100755
--- a/tests/nvme/012
+++ b/tests/nvme/012
@@ -29,7 +29,7 @@ test() {
        local file_path="${TMPDIR}/img"
        local subsys_name="blktests-subsystem-1"

-       truncate -s 1G "${file_path}"
+       truncate -s 2G "${file_path}"

        loop_dev="$(losetup -f --show "${file_path}")"

diff --git a/tests/nvme/013 b/tests/nvme/013
index 265b696..602c86f 100755
--- a/tests/nvme/013
+++ b/tests/nvme/013
@@ -28,7 +28,7 @@ test() {

        local subsys_name="blktests-subsystem-1"

-       truncate -s 1G "${file_path}"
+       truncate -s 2G "${file_path}"

        _create_nvmet_subsystem "${subsys_name}" "${file_path}" \
                "91fdba0d-f87b-4c25-b80f-db7be1418b9e"
[3]
diff --git a/common/xfs b/common/xfs
index 210c924..0018212 100644
--- a/common/xfs
+++ b/common/xfs
@@ -26,7 +26,7 @@ _xfs_run_fio_verify_io() {

        _xfs_mkfs_and_mount "${bdev}" "${mount_dir}" >> "${FULL}" 2>&1

-       _run_fio_verify_io --size=950m --directory="${mount_dir}/"
+       _run_fio_verify_io --size=900m --directory="${mount_dir}/"

        umount "${mount_dir}" >> "${FULL}" 2>&1
        rm -fr "${mount_dir}"
>
> --
> Shin'ichiro Kawasaki
>
Shinichiro Kawasaki Oct. 24, 2022, 12:50 a.m. UTC | #9
On Oct 23, 2022 / 23:27, Yi Zhang wrote:
> On Sat, Oct 22, 2022 at 7:57 AM Shinichiro Kawasaki
> <shinichiro.kawasaki@wdc.com> wrote:
> >
> > On Oct 21, 2022 / 21:42, Chaitanya Kulkarni wrote:
> >
> > ...
> >
> > > I think creating a minimal setup is part of the testcase, and we should
> > > not change it unless there is an explicit reason for doing so.
> >
> > I see. I find no reason to change the "minimal log size policy", so let's go
> > with the 64MB log size to keep it.
> >
> > Yi, would you mind reposting v2 with size=64m?
> Sure, and before I post it, I want to ask for suggestions about some
> other code changes:
> 
> After setting the log size to 64M, I found that nvme/012 and nvme/013
> fail[1]: there was not enough space for the fio test with size=950m.
> Either [2] or [3] works; which one do you prefer, or do you have some
> other suggestion? Thanks.

Thank you for testing. I guess the fio I/O size=950m was chosen by subtracting
the superblock and log size from the 1GB NVMe device size. Now that we increase
the log size, the I/O size 950m is probably larger than the usable xfs size.
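
(The rough arithmetic behind that guess, in MiB and purely illustrative:)

# 1 GiB device = 1024 MiB. With the old 32 MiB log, ~992 MiB remained,
# so a 950m fio file fit. With the new 64 MiB log:
echo $(( 1024 - 64 ))	# 960 MiB before superblock/AG metadata overhead,
			# which pushes usable space below 950 MiB -> ENOSPC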

Chaitanya, what's your thought on the fix approach? To keep the "minimal log
size policy", I guess approach [3], reducing the fio I/O size to 900m, is more
appropriate, but I would like to hear your insight.


From Yi's observation, I found a couple of improvement opportunities which are
beyond the scope of this fix. Here I note them as a memorandum (patches are welcome :)

1) Assuming the 1GB nvme device size defined in nvme/012 and nvme/013 is related
   to the fio I/O size 950m defined in common/xfs, these values should be defined
   in a single place. Probably we should define both in nvme/012 and nvme/013.

2) The fio I/O size 950m is defined in _xfs_run_fio_verify_io(), which is called
   from nvme/035. Then, it is implicitly assumed that TEST_DEV for nvme/035 has
   size 1GB (or larger). I found that nvme/035 fails with a 512MB nvme device.
   We should fix this by calculating the fio I/O size from the TEST_DEV size.
   (Or require a 1GB nvme device size for the test case.)
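
A sketch of what (2) could look like, reusing the helper shape from hunk [3]
above; the 60 MiB metadata allowance and the fixed mount point are
assumptions, not a tested implementation:

_xfs_run_fio_verify_io() {
	local bdev="$1"
	local mount_dir="/mnt/blktests"
	local dev_mb io_mb

	# size the fio job from the device instead of hard-coding 950m
	dev_mb=$(( $(blockdev --getsize64 "${bdev}") / 1024 / 1024 ))
	# reserve the 64 MiB log plus a rough allowance for xfs metadata
	io_mb=$(( dev_mb - 64 - 60 ))

	_xfs_mkfs_and_mount "${bdev}" "${mount_dir}" >> "${FULL}" 2>&1
	_run_fio_verify_io --size="${io_mb}m" --directory="${mount_dir}/"
	umount "${mount_dir}" >> "${FULL}" 2>&1
	rm -fr "${mount_dir}"
}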
Yi Zhang Oct. 24, 2022, 6:32 a.m. UTC | #10
On Mon, Oct 24, 2022 at 8:50 AM Shinichiro Kawasaki
<shinichiro.kawasaki@wdc.com> wrote:
>
> On Oct 23, 2022 / 23:27, Yi Zhang wrote:
> > On Sat, Oct 22, 2022 at 7:57 AM Shinichiro Kawasaki
> > <shinichiro.kawasaki@wdc.com> wrote:
> > >
> > > On Oct 21, 2022 / 21:42, Chaitanya Kulkarni wrote:
> > >
> > > ...
> > >
> > > > I think creating a minimal setup is part of the testcase, and we should
> > > > not change it unless there is an explicit reason for doing so.
> > >
> > > I see. I find no reason to change the "minimal log size policy", so let's go
> > > with the 64MB log size to keep it.
> > >
> > > Yi, would you mind reposting v2 with size=64m?
> > Sure, and before I post it, I want to ask for suggestions about some
> > other code changes:
> >
> > After setting the log size to 64M, I found that nvme/012 and nvme/013
> > fail[1]: there was not enough space for the fio test with size=950m.
> > Either [2] or [3] works; which one do you prefer, or do you have some
> > other suggestion? Thanks.
>
> Thank you for testing. I guess the fio I/O size=950m was chosen by subtracting
> the superblock and log size from the 1GB NVMe device size. Now that we increase
> the log size, the I/O size 950m is probably larger than the usable xfs size.
>
> Chaitanya, what's your thought on the fix approach? To keep the "minimal log
> size policy", I guess approach [3], reducing the fio I/O size to 900m, is more
> appropriate, but I would like to hear your insight.
>
>
> From Yi's observation, I found a couple of improvement opportunities which are
> beyond the scope of this fix. Here I note them as a memorandum (patches are welcome :)

I resent the patch to set the log size to 64M, plus another two which
address the points below :)

>
> 1) Assuming the 1GB nvme device size defined in nvme/012 and nvme/013 is related
>    to the fio I/O size 950m defined in common/xfs, these values should be defined
>    in a single place. Probably we should define both in nvme/012 and nvme/013.
>
> 2) The fio I/O size 950m is defined in _xfs_run_fio_verify_io(), which is called
>    from nvme/035. Then, it is implicitly assumed that TEST_DEV for nvme/035 has
>    size 1GB (or larger). I found that nvme/035 fails with a 512MB nvme device.
>    We should fix this by calculating the fio I/O size from the TEST_DEV size.
>    (Or require a 1GB nvme device size for the test case.)
>
> --
> Shin'ichiro Kawasaki
>
Shinichiro Kawasaki Oct. 24, 2022, 10:31 a.m. UTC | #11
On Oct 24, 2022 / 14:32, Yi Zhang wrote:
> On Mon, Oct 24, 2022 at 8:50 AM Shinichiro Kawasaki
> <shinichiro.kawasaki@wdc.com> wrote:

...

> > From Yi's observation, I found a couple of improvement opportunities which are
> > beyond the scope of this fix. Here I note them as a memorandum (patches are welcome :)
> 
> I resent the patch to set the log size to 64M, plus another two which
> address the points below :)

Thanks! I'll wait for Chaitanya's comment, then review your patches.

> 
> >
> > 1) Assuming the 1GB nvme device size defined in nvme/012 and nvme/013 is related
> >    to the fio I/O size 950m defined in common/xfs, these values should be defined
> >    in a single place. Probably we should define both in nvme/012 and nvme/013.
> >
> > 2) The fio I/O size 950m is defined in _xfs_run_fio_verify_io(), which is called
> >    from nvme/035. Then, it is implicitly assumed that TEST_DEV for nvme/035 has
> >    size 1GB (or larger). I found that nvme/035 fails with a 512MB nvme device.
> >    We should fix this by calculating the fio I/O size from the TEST_DEV size.
> >    (Or require a 1GB nvme device size for the test case.)
> >
> > --
> > Shin'ichiro Kawasaki
> >
> 
> 
> -- 
> Best Regards,
>   Yi Zhang
>
Chaitanya Kulkarni Oct. 25, 2022, 12:42 a.m. UTC | #12
Shinichiro/Yi,

On 10/23/22 17:50, Shinichiro Kawasaki wrote:
> On Oct 23, 2022 / 23:27, Yi Zhang wrote:
>> On Sat, Oct 22, 2022 at 7:57 AM Shinichiro Kawasaki
>> <shinichiro.kawasaki@wdc.com> wrote:
>>>
>>> On Oct 21, 2022 / 21:42, Chaitanya Kulkarni wrote:
>>>
>>> ...
>>>
>>>> I think creating a minimal setup is part of the testcase, and we should
>>>> not change it unless there is an explicit reason for doing so.
>>>
>>> I see. I find no reason to change the "minimal log size policy", so let's go
>>> with the 64MB log size to keep it.
>>>
>>> Yi, would you mind reposting v2 with size=64m?
>> Sure, and before I post it, I want to ask for suggestions about some
>> other code changes:
>>
>> After setting the log size to 64M, I found that nvme/012 and nvme/013
>> fail[1]: there was not enough space for the fio test with size=950m.
>> Either [2] or [3] works; which one do you prefer, or do you have some
>> other suggestion? Thanks.
> 
> Thank you for testing. I guess the fio I/O size=950m was chosen by subtracting
> the superblock and log size from the 1GB NVMe device size. Now that we increase
> the log size, the I/O size 950m is probably larger than the usable xfs size.
> 
> Chaitanya, what's your thought on the fix approach? To keep the "minimal log
> size policy", I guess approach [3], reducing the fio I/O size to 900m, is more
> appropriate, but I would like to hear your insight.

I'm fine with adjusting the size so it can fit with the new minimum log
size.

> 
> 
>  From Yi's observation, I found a couple of improvement opportunities which are
> beyond the scope of this fix. Here I note them as a memorandum (patches are welcome :)
> 
> 1) Assuming the 1GB nvme device size defined in nvme/012 and nvme/013 is related
>     to the fio I/O size 950m defined in common/xfs, these values should be defined
>     in a single place. Probably we should define both in nvme/012 and nvme/013.

Agree.

> 
> 2) The fio I/O size 950m is defined in _xfs_run_fio_verify_io(), which is called
>     from nvme/035. Then, it is implicitly assumed that TEST_DEV for nvme/035 has
>     size 1GB (or larger). I found that nvme/035 fails with a 512MB nvme device.
>     We should fix this by calculating the fio I/O size from the TEST_DEV size.
>     (Or require a 1GB nvme device size for the test case.)
> 

Also, agree on this.

The two fixes listed above should be done as part of this fix itself.

I'd expect to see a patch series fixing all the issues listed above;
please CC me so I can review it with priority.

-ck
Shinichiro Kawasaki Oct. 25, 2022, 1:53 a.m. UTC | #13
On Oct 25, 2022 / 00:42, Chaitanya Kulkarni wrote:
> Shinichiro/Yi,
> 
> On 10/23/22 17:50, Shinichiro Kawasaki wrote:
> > On Oct 23, 2022 / 23:27, Yi Zhang wrote:
> >> On Sat, Oct 22, 2022 at 7:57 AM Shinichiro Kawasaki
> >> <shinichiro.kawasaki@wdc.com> wrote:
> >>>
> >>> On Oct 21, 2022 / 21:42, Chaitanya Kulkarni wrote:
> >>>
> >>> ...
> >>>
> >>>> I think creating a minimal setup is part of the testcase, and we should
> >>>> not change it unless there is an explicit reason for doing so.
> >>>
> >>> I see. I find no reason to change the "minimal log size policy", so let's go
> >>> with the 64MB log size to keep it.
> >>>
> >>> Yi, would you mind reposting v2 with size=64m?
> >> Sure, and before I post it, I want to ask for suggestions about some
> >> other code changes:
> >>
> >> After setting the log size to 64M, I found that nvme/012 and nvme/013
> >> fail[1]: there was not enough space for the fio test with size=950m.
> >> Either [2] or [3] works; which one do you prefer, or do you have some
> >> other suggestion? Thanks.
> > 
> > Thank you for testing. I guess the fio I/O size=950m was chosen by subtracting
> > the superblock and log size from the 1GB NVMe device size. Now that we increase
> > the log size, the I/O size 950m is probably larger than the usable xfs size.
> > 
> > Chaitanya, what's your thought on the fix approach? To keep the "minimal log
> > size policy", I guess approach [3], reducing the fio I/O size to 900m, is more
> > appropriate, but I would like to hear your insight.
> 
> I'm fine with adjusting the size so it can fit with the new minimum log
> size.

Thank you for the comment. Then let's go with the size 900m.

> 
> > 
> > 
> >  From Yi's observation, I found a couple of improvement opportunities which are
> > beyond the scope of this fix. Here I note them as a memorandum (patches are welcome :)
> > 
> > 1) Assuming the 1GB nvme device size defined in nvme/012 and nvme/013 is related
> >     to the fio I/O size 950m defined in common/xfs, these values should be defined
> >     in a single place. Probably we should define both in nvme/012 and nvme/013.
> 
> Agree.
> 
> > 
> > 2) The fio I/O size 950m is defined in _xfs_run_fio_verify_io(), which is called
> >     from nvme/035. Then, it is implicitly assumed that TEST_DEV for nvme/035 has
> >     size 1GB (or larger). I found that nvme/035 fails with a 512MB nvme device.
> >     We should fix this by calculating the fio I/O size from the TEST_DEV size.
> >     (Or require a 1GB nvme device size for the test case.)
> > 
> 
> Also, agree on this.
> 
> The two fixes listed above should be done as part of this fix itself.
> 
> I'd expect to see a patch series fixing all the issues listed above;
> please CC me so I can review it with priority.

Thank you for these comments also. Yi already posted the series :)

https://lore.kernel.org/linux-block/20221024061319.1133470-1-yi.zhang@redhat.com/

Patch

diff --git a/common/xfs b/common/xfs
index 210c924..f0c9b7f 100644
--- a/common/xfs
+++ b/common/xfs
@@ -16,7 +16,7 @@  _xfs_mkfs_and_mount() {
 
 	mkdir -p "${mount_dir}"
 	umount "${mount_dir}"
-	mkfs.xfs -l size=32m -f "${bdev}"
+	mkfs.xfs -f "${bdev}"
 	mount "${bdev}" "${mount_dir}"
 }
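
(As agreed in the thread, the reposted v2 keeps an explicit minimal log at
the new 64MB floor; presumably the hunk becomes:)

-	mkfs.xfs -l size=32m -f "${bdev}"
+	mkfs.xfs -l size=64m -f "${bdev}"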