
[21/28] generic/531: limit max files per CPU

Message ID 20250417031208.1852171-22-david@fromorbit.com
State New
Series check-parallel: Running tests without check

Commit Message

Dave Chinner April 17, 2025, 3:01 a.m. UTC
From: Dave Chinner <dchinner@redhat.com>

Currently g/531 runs t_open_files on every CPU, and with default
kernel settings that means 50,000 files per CPU are tested. On 64p
machines this means the test tries to create and unlink over 3
million files. This takes a long time:

Ten slowest tests - runtime in seconds:
generic/531 534
.....
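
For scale, the old per-CPU limit multiplies out to:

    50000 files/CPU * 64 CPUs = 3,200,000 files created and unlinked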

Yet generic/531 is included in the 'quick' test group. It is
anything but "quick" on large CPU count systems.

Further, the small filesystems typically used for fstests do not
have the inherent concurrency to scale out this workload
effectively. Even using the mkfs.xfs concurrency options requires
>250GB scratch devices on 64p machines because mkfs won't make AGs
smaller than 4GB. Hence to get 64-way concurrency in the
filesystem, we need huge devices to be set up, and that's not
really practical for check-parallel.
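
The scratch device size above follows directly from that AG size
floor:

    64 AGs (one per CPU) * 4GB minimum AG size = 256GB scratch device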

Hence limit the total number of files this test will create to a
sane number and distribute them over all the CPUs so that the test
runtime does not blow out on big systems. LOAD_FACTOR can still be
used to increase the test runtime by increasing the total number of
files created.
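
As a rough worked example of the new scheme on a 64p machine (the
test runs two threads per CPU, so nr_cpus = 128), assuming the
default LOAD_FACTOR of 1 and a large fs.file-max:

    max_files = 100000 * 1 = 100000 total files
    ulimit -n = 100000 / 128 = 781 files per process

With LOAD_FACTOR=4 the total rises to 400,000 files, or 3125 per
process.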

Limiting the total number of files created brings g/531 back into
the "quick" test range on a 64p system:

generic/531        5s

Signed-off-by: Dave Chinner <dchinner@redhat.com>
---
 tests/generic/531 | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

Patch

diff --git a/tests/generic/531 b/tests/generic/531
index 07dffd9fd..3f691c0f8 100755
--- a/tests/generic/531
+++ b/tests/generic/531
@@ -29,13 +29,13 @@  _scratch_mount
 # Try to load up all the CPUs, two threads per CPU.
 nr_cpus=$(( $(getconf _NPROCESSORS_ONLN) * 2 ))
 
-# Set ULIMIT_NOFILE to min(file-max / $nr_cpus / 2, 50000 files per LOAD_FACTOR)
+# Set ULIMIT_NOFILE to min(100000 * LOAD_FACTOR, file-max / 2) / $nr_cpus
 # so that this test doesn't take forever or OOM the box
-max_files=$((50000 * LOAD_FACTOR))
-max_allowable_files=$(( $(cat /proc/sys/fs/file-max) / $nr_cpus / 2 ))
+max_files=$((100000 * LOAD_FACTOR))
+max_allowable_files=$(( $(cat /proc/sys/fs/file-max) / 2 ))
 test $max_allowable_files -gt 0 && test $max_files -gt $max_allowable_files && \
 	max_files=$max_allowable_files
-ulimit -n $max_files
+ulimit -n $((max_files / nr_cpus))
 
 # Open a lot of unlinked files
 echo create >> $seqres.full