Message ID | 2d118ed03559472a0bf878509a32a9dded03efb2.1692600259.git.naohiro.aota@wdc.com
---|---
State | New, archived
Series | use shuf to choose a random file
LGTM
Reviewed-by: Anand Jain <anand.jain@oracle.com>
diff --git a/tests/btrfs/004 b/tests/btrfs/004
index ea40dbf62880..78df6a3af6b1 100755
--- a/tests/btrfs/004
+++ b/tests/btrfs/004
@@ -201,7 +201,7 @@ workout()
 	cnt=0
 	errcnt=0
 	dir="$SCRATCH_MNT/$snap_name/"
-	for file in `find $dir -name f\* -size +0 | sort -R`; do
+	for file in `find $dir -name f\* -size +0 | shuf`; do
 		extents=`_check_file_extents $file`
 		ret=$?
 		if [ $ret -ne 0 ]; then
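[Editor's note, not part of the patch: the test above shuffles the whole file list and iterates over it. When only a single random file is wanted, "shuf" can also truncate its own output with -n, avoiding the loop entirely. A minimal sketch, where /path/to/dir is a placeholder directory, not a value from the test:

    # Pick one random non-empty f* file; /path/to/dir is a placeholder.
    file=$(find /path/to/dir -name 'f*' -size +0 | shuf -n 1)
]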
The "sort -R" is slower than "shuf" even with the full output because "sort -R" actually sort them to group the identical keys. $ time bash -c "seq 1000000 | shuf >/dev/null" bash -c "seq 1000000 | shuf >/dev/null" 0.18s user 0.03s system 104% cpu 0.196 total $ time bash -c "seq 1000000 | sort -R >/dev/null" bash -c "seq 1000000 | sort -R >/dev/null" 19.61s user 0.03s system 99% cpu 19.739 total Since the "find"'s outputs never be identical, we can just use "shuf" to optimize the selection. Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com> --- tests/btrfs/004 | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)