
[md-6.10,5/9] md: replace sysfs api sync_action with new helpers

Message ID 20240509011900.2694291-6-yukuai1@huaweicloud.com (mailing list archive)
State Not Applicable, archived
Delegated to: Mike Snitzer
Series md: refactor and cleanup for sync action

Commit Message

Yu Kuai May 9, 2024, 1:18 a.m. UTC
From: Yu Kuai <yukuai3@huawei.com>

Get rid of the extremely long if/else-if chain and make the code cleaner.

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 drivers/md/md.c | 94 +++++++++++++++++++++++++++----------------------
 1 file changed, 52 insertions(+), 42 deletions(-)

Comments

kernel test robot May 20, 2024, 3:01 p.m. UTC | #1
Hello,

kernel test robot noticed "mdadm-selftests.07reshape5intr.fail" on:

commit: 18effaab5f57ef44763e537c782f905e06f6c4f5 ("[PATCH md-6.10 5/9] md: replace sysfs api sync_action with new helpers")
url: https://github.com/intel-lab-lkp/linux/commits/Yu-Kuai/md-rearrange-recovery_flage/20240509-093248
base: https://git.kernel.org/cgit/linux/kernel/git/device-mapper/linux-dm.git for-next
patch link: https://lore.kernel.org/all/20240509011900.2694291-6-yukuai1@huaweicloud.com/
patch subject: [PATCH md-6.10 5/9] md: replace sysfs api sync_action with new helpers

in testcase: mdadm-selftests
version: mdadm-selftests-x86_64-5f41845-1_20240412
with following parameters:

	disk: 1HDD
	test_prefix: 07reshape5intr



compiler: gcc-13
test machine: 8 threads 1 sockets Intel(R) Core(TM) i7-4790T CPU @ 2.70GHz (Haswell) with 16G memory

(please refer to attached dmesg/kmsg for entire log/backtrace)




If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <oliver.sang@intel.com>
| Closes: https://lore.kernel.org/oe-lkp/202405202204.4e3dc662-oliver.sang@intel.com

2024-05-14 21:36:26 mkdir -p /var/tmp
2024-05-14 21:36:26 mke2fs -t ext3 -b 4096 -J size=4 -q /dev/sda1
2024-05-14 21:36:57 mount -t ext3 /dev/sda1 /var/tmp
sed -e 's/{DEFAULT_METADATA}/1.2/g' \
-e 's,{MAP_PATH},/run/mdadm/map,g'  mdadm.8.in > mdadm.8
/usr/bin/install -D -m 644 mdadm.8 /usr/share/man/man8/mdadm.8
/usr/bin/install -D -m 644 mdmon.8 /usr/share/man/man8/mdmon.8
/usr/bin/install -D -m 644 md.4 /usr/share/man/man4/md.4
/usr/bin/install -D -m 644 mdadm.conf.5 /usr/share/man/man5/mdadm.conf.5
/usr/bin/install -D -m 644 udev-md-raid-creating.rules /lib/udev/rules.d/01-md-raid-creating.rules
/usr/bin/install -D -m 644 udev-md-raid-arrays.rules /lib/udev/rules.d/63-md-raid-arrays.rules
/usr/bin/install -D -m 644 udev-md-raid-assembly.rules /lib/udev/rules.d/64-md-raid-assembly.rules
/usr/bin/install -D -m 644 udev-md-clustered-confirm-device.rules /lib/udev/rules.d/69-md-clustered-confirm-device.rules
/usr/bin/install -D  -m 755 mdadm /sbin/mdadm
/usr/bin/install -D  -m 755 mdmon /sbin/mdmon
Testing on linux-6.9.0-rc2-00012-g18effaab5f57 kernel
/lkp/benchmarks/mdadm-selftests/tests/07reshape5intr... FAILED - see /var/tmp/07reshape5intr.log and /var/tmp/fail07reshape5intr.log for details



The kernel config and materials to reproduce are available at:
https://download.01.org/0day-ci/archive/20240520/202405202204.4e3dc662-oliver.sang@intel.com
Yu Kuai May 21, 2024, 2:20 a.m. UTC | #2
Hi,

On 2024/05/20 23:01, kernel test robot wrote:
> 
> 
> Hello,
> 
> kernel test robot noticed "mdadm-selftests.07reshape5intr.fail" on:
> 
> commit: 18effaab5f57ef44763e537c782f905e06f6c4f5 ("[PATCH md-6.10 5/9] md: replace sysfs api sync_action with new helpers")
> url: https://github.com/intel-lab-lkp/linux/commits/Yu-Kuai/md-rearrange-recovery_flage/20240509-093248
> base: https://git.kernel.org/cgit/linux/kernel/git/device-mapper/linux-dm.git for-next
> patch link: https://lore.kernel.org/all/20240509011900.2694291-6-yukuai1@huaweicloud.com/
> patch subject: [PATCH md-6.10 5/9] md: replace sysfs api sync_action with new helpers
> 
> in testcase: mdadm-selftests
> version: mdadm-selftests-x86_64-5f41845-1_20240412
> with following parameters:
> 
> 	disk: 1HDD
> 	test_prefix: 07reshape5intr
> 
> 
> 
> compiler: gcc-13
> test machine: 8 threads 1 sockets Intel(R) Core(TM) i7-4790T CPU @ 2.70GHz (Haswell) with 16G memory
> 
> (please refer to attached dmesg/kmsg for entire log/backtrace)
> 
> 
> 
> 
> If you fix the issue in a separate patch/commit (i.e. not just a new version of
> the same patch/commit), kindly add following tags
> | Reported-by: kernel test robot <oliver.sang@intel.com>
> | Closes: https://lore.kernel.org/oe-lkp/202405202204.4e3dc662-oliver.sang@intel.com
> 
> 2024-05-14 21:36:26 mkdir -p /var/tmp
> 2024-05-14 21:36:26 mke2fs -t ext3 -b 4096 -J size=4 -q /dev/sda1
> 2024-05-14 21:36:57 mount -t ext3 /dev/sda1 /var/tmp
> sed -e 's/{DEFAULT_METADATA}/1.2/g' \
> -e 's,{MAP_PATH},/run/mdadm/map,g'  mdadm.8.in > mdadm.8
> /usr/bin/install -D -m 644 mdadm.8 /usr/share/man/man8/mdadm.8
> /usr/bin/install -D -m 644 mdmon.8 /usr/share/man/man8/mdmon.8
> /usr/bin/install -D -m 644 md.4 /usr/share/man/man4/md.4
> /usr/bin/install -D -m 644 mdadm.conf.5 /usr/share/man/man5/mdadm.conf.5
> /usr/bin/install -D -m 644 udev-md-raid-creating.rules /lib/udev/rules.d/01-md-raid-creating.rules
> /usr/bin/install -D -m 644 udev-md-raid-arrays.rules /lib/udev/rules.d/63-md-raid-arrays.rules
> /usr/bin/install -D -m 644 udev-md-raid-assembly.rules /lib/udev/rules.d/64-md-raid-assembly.rules
> /usr/bin/install -D -m 644 udev-md-clustered-confirm-device.rules /lib/udev/rules.d/69-md-clustered-confirm-device.rules
> /usr/bin/install -D  -m 755 mdadm /sbin/mdadm
> /usr/bin/install -D  -m 755 mdmon /sbin/mdmon
> Testing on linux-6.9.0-rc2-00012-g18effaab5f57 kernel
> /lkp/benchmarks/mdadm-selftests/tests/07reshape5intr... FAILED - see /var/tmp/07reshape5intr.log and /var/tmp/fail07reshape5intr.log for detail
[root@fedora mdadm]# ./test --dev=loop --tests=07reshape5intr
test: skipping tests for multipath, which is removed in upstream 6.8+ kernels
test: skipping tests for linear, which is removed in upstream 6.8+ kernels
Testing on linux-6.9.0-rc2-00023-gf092583596a2 kernel
/root/mdadm/tests/07reshape5intr... FAILED - see /var/tmp/07reshape5intr.log and /var/tmp/fail07reshape5intr.log for details
   (KNOWN BROKEN TEST: always fails)

So, note that this test is marked BROKEN and always fails in my environment.

Please share the whole log; also, is it possible to share the two logs?

Thanks,
Kuai

> 
> 
> 
> The kernel config and materials to reproduce are available at:
> https://download.01.org/0day-ci/archive/20240520/202405202204.4e3dc662-oliver.sang@intel.com
> 
> 
>
kernel test robot May 21, 2024, 3:01 a.m. UTC | #3
hi, Yu Kuai,

On Tue, May 21, 2024 at 10:20:54AM +0800, Yu Kuai wrote:
> Hi,
> 
> On 2024/05/20 23:01, kernel test robot wrote:
> > 
> > 
> > Hello,
> > 
> > kernel test robot noticed "mdadm-selftests.07reshape5intr.fail" on:
> > 
> > commit: 18effaab5f57ef44763e537c782f905e06f6c4f5 ("[PATCH md-6.10 5/9] md: replace sysfs api sync_action with new helpers")
> > url: https://github.com/intel-lab-lkp/linux/commits/Yu-Kuai/md-rearrange-recovery_flage/20240509-093248
> > base: https://git.kernel.org/cgit/linux/kernel/git/device-mapper/linux-dm.git for-next
> > patch link: https://lore.kernel.org/all/20240509011900.2694291-6-yukuai1@huaweicloud.com/
> > patch subject: [PATCH md-6.10 5/9] md: replace sysfs api sync_action with new helpers
> > 
> > in testcase: mdadm-selftests
> > version: mdadm-selftests-x86_64-5f41845-1_20240412
> > with following parameters:
> > 
> > 	disk: 1HDD
> > 	test_prefix: 07reshape5intr
> > 
> > 
> > 
> > compiler: gcc-13
> > test machine: 8 threads 1 sockets Intel(R) Core(TM) i7-4790T CPU @ 2.70GHz (Haswell) with 16G memory
> > 
> > (please refer to attached dmesg/kmsg for entire log/backtrace)
> > 
> > 
> > 
> > 
> > If you fix the issue in a separate patch/commit (i.e. not just a new version of
> > the same patch/commit), kindly add following tags
> > | Reported-by: kernel test robot <oliver.sang@intel.com>
> > | Closes: https://lore.kernel.org/oe-lkp/202405202204.4e3dc662-oliver.sang@intel.com
> > 
> > 2024-05-14 21:36:26 mkdir -p /var/tmp
> > 2024-05-14 21:36:26 mke2fs -t ext3 -b 4096 -J size=4 -q /dev/sda1
> > 2024-05-14 21:36:57 mount -t ext3 /dev/sda1 /var/tmp
> > sed -e 's/{DEFAULT_METADATA}/1.2/g' \
> > -e 's,{MAP_PATH},/run/mdadm/map,g'  mdadm.8.in > mdadm.8
> > /usr/bin/install -D -m 644 mdadm.8 /usr/share/man/man8/mdadm.8
> > /usr/bin/install -D -m 644 mdmon.8 /usr/share/man/man8/mdmon.8
> > /usr/bin/install -D -m 644 md.4 /usr/share/man/man4/md.4
> > /usr/bin/install -D -m 644 mdadm.conf.5 /usr/share/man/man5/mdadm.conf.5
> > /usr/bin/install -D -m 644 udev-md-raid-creating.rules /lib/udev/rules.d/01-md-raid-creating.rules
> > /usr/bin/install -D -m 644 udev-md-raid-arrays.rules /lib/udev/rules.d/63-md-raid-arrays.rules
> > /usr/bin/install -D -m 644 udev-md-raid-assembly.rules /lib/udev/rules.d/64-md-raid-assembly.rules
> > /usr/bin/install -D -m 644 udev-md-clustered-confirm-device.rules /lib/udev/rules.d/69-md-clustered-confirm-device.rules
> > /usr/bin/install -D  -m 755 mdadm /sbin/mdadm
> > /usr/bin/install -D  -m 755 mdmon /sbin/mdmon
> > Testing on linux-6.9.0-rc2-00012-g18effaab5f57 kernel
> > /lkp/benchmarks/mdadm-selftests/tests/07reshape5intr... FAILED - see /var/tmp/07reshape5intr.log and /var/tmp/fail07reshape5intr.log for detail
> [root@fedora mdadm]# ./test --dev=loop --tests=07reshape5intr
> test: skipping tests for multipath, which is removed in upstream 6.8+
> kernels
> test: skipping tests for linear, which is removed in upstream 6.8+ kernels
> Testing on linux-6.9.0-rc2-00023-gf092583596a2 kernel
> /root/mdadm/tests/07reshape5intr... FAILED - see /var/tmp/07reshape5intr.log
> and /var/tmp/fail07reshape5intr.log for details
>   (KNOWN BROKEN TEST: always fails)
> 
> So, since this test is marked BROKEN.
> 
> Please share the whole log, and is it possible to share the two logs?


We only captured one log, attached as log-18effaab5f.
The parent log is also attached, FYI.


> 
> Thanks,
> Kuai
> 
> > 
> > 
> > 
> > The kernel config and materials to reproduce are available at:
> > https://download.01.org/0day-ci/archive/20240520/202405202204.4e3dc662-oliver.sang@intel.com
> > 
> > 
> > 
>
+ . /lkp/benchmarks/mdadm-selftests/tests/07reshape5intr
++ set -x
++ devs=/dev/loop1
++ st=UU
++ for disks in 2 3 4 5
++ eval 'devs="/dev/loop1' '$dev2"'
+++ devs='/dev/loop1 /dev/loop2'
++ st=UUU
++ for d in $devs
++ dd if=/dev/urandom of=/dev/loop1 bs=1024
dd: error writing '/dev/loop1': No space left on device
20001+0 records in
20000+0 records out
20480000 bytes (20 MB, 20 MiB) copied, 0.909885 s, 22.5 MB/s
++ true
++ for d in $devs
++ dd if=/dev/urandom of=/dev/loop2 bs=1024
dd: error writing '/dev/loop2': No space left on device
20001+0 records in
20000+0 records out
20480000 bytes (20 MB, 20 MiB) copied, 0.633683 s, 32.3 MB/s
++ true
++ case $disks in
++ chunk=1024
++ mdadm -CR /dev/md0 -amd -l5 -c 1024 -n2 --assume-clean /dev/loop1 /dev/loop2
++ rm -f /var/tmp/stderr
++ case $* in
++ case $* in
++ for args in $*
++ [[ -CR =~ /dev/ ]]
++ for args in $*
++ [[ /dev/md0 =~ /dev/ ]]
++ [[ /dev/md0 =~ md ]]
++ for args in $*
++ [[ -amd =~ /dev/ ]]
++ for args in $*
++ [[ -l5 =~ /dev/ ]]
++ for args in $*
++ [[ -c =~ /dev/ ]]
++ for args in $*
++ [[ 1024 =~ /dev/ ]]
++ for args in $*
++ [[ -n2 =~ /dev/ ]]
++ for args in $*
++ [[ --assume-clean =~ /dev/ ]]
++ for args in $*
++ [[ /dev/loop1 =~ /dev/ ]]
++ [[ /dev/loop1 =~ md ]]
++ /lkp/benchmarks/mdadm-selftests/mdadm --zero /dev/loop1
mdadm: Unrecognised md component device - /dev/loop1
++ for args in $*
++ [[ /dev/loop2 =~ /dev/ ]]
++ [[ /dev/loop2 =~ md ]]
++ /lkp/benchmarks/mdadm-selftests/mdadm --zero /dev/loop2
mdadm: Unrecognised md component device - /dev/loop2
++ /lkp/benchmarks/mdadm-selftests/mdadm --quiet -CR /dev/md0 -amd -l5 -c 1024 -n2 --assume-clean /dev/loop1 /dev/loop2 --auto=yes
++ rv=0
++ case $* in
++ cat /var/tmp/stderr
++ return 0
++ mdadm /dev/md0 --add /dev/loop6
++ rm -f /var/tmp/stderr
++ case $* in
++ case $* in
++ /lkp/benchmarks/mdadm-selftests/mdadm --quiet /dev/md0 --add /dev/loop6
++ rv=0
++ case $* in
++ cat /var/tmp/stderr
++ return 0
++ echo 20
++ echo 20
++ mdadm --grow /dev/md0 -n 3
++ rm -f /var/tmp/stderr
++ case $* in
++ case $* in
++ /lkp/benchmarks/mdadm-selftests/mdadm --quiet --grow /dev/md0 -n 3
++ rv=1
++ case $* in
++ cat /var/tmp/stderr
mdadm: Failed to initiate reshape!
++ return 1
++ check reshape
++ case $1 in
++ cnt=5
++ grep -sq reshape /proc/mdstat
++ '[' 5 -gt 0 ']'
++ grep -v idle /sys/block/md0/md/sync_action
++ die 'no reshape happening'
++ echo -e '\n\tERROR: no reshape happening \n'

	ERROR: no reshape happening 

++ save_log fail
++ status=fail
++ logfile=fail07reshape5intr.log
++ cat /var/tmp/stderr
++ cp /var/tmp/log /var/tmp/07reshape5intr.log
++ echo '## lkp-hsw-d05: saving dmesg.'
++ dmesg -c
++ echo '## lkp-hsw-d05: saving proc mdstat.'
++ cat /proc/mdstat
++ array=($(mdadm -Ds | cut -d' ' -f2))
+++ mdadm -Ds
+++ rm -f /var/tmp/stderr
+++ cut '-d ' -f2
+++ case $* in
+++ case $* in
+++ /lkp/benchmarks/mdadm-selftests/mdadm --quiet -Ds
+++ rv=0
+++ case $* in
+++ cat /var/tmp/stderr
+++ return 0
++ '[' fail == fail ']'
++ echo 'FAILED - see /var/tmp/07reshape5intr.log and /var/tmp/fail07reshape5intr.log for details'
FAILED - see /var/tmp/07reshape5intr.log and /var/tmp/fail07reshape5intr.log for details
++ '[' loop == lvm ']'
++ '[' loop == loop -o loop == disk ']'
++ '[' '!' -z /dev/md0 -a 1 -ge 1 ']'
++ echo '## lkp-hsw-d05: mdadm -D /dev/md0'
++ /lkp/benchmarks/mdadm-selftests/mdadm -D /dev/md0
++ cat /proc/mdstat
++ grep -q 'linear\|external'
++ md_disks=($($mdadm -D -Y ${array[@]} | grep "/dev/" | cut -d'=' -f2))
+++ /lkp/benchmarks/mdadm-selftests/mdadm -D -Y /dev/md0
+++ grep /dev/
+++ cut -d= -f2
++ cat /proc/mdstat
++ grep -q bitmap
++ '[' 1 -eq 0 ']'
++ exit 2
+ . /lkp/benchmarks/mdadm-selftests/tests/07reshape5intr
++ set -x
++ devs=/dev/loop1
++ st=UU
++ for disks in 2 3 4 5
++ eval 'devs="/dev/loop1' '$dev2"'
+++ devs='/dev/loop1 /dev/loop2'
++ st=UUU
++ for d in $devs
++ dd if=/dev/urandom of=/dev/loop1 bs=1024
dd: error writing '/dev/loop1': No space left on device
20001+0 records in
20000+0 records out
20480000 bytes (20 MB, 20 MiB) copied, 0.860819 s, 23.8 MB/s
++ true
++ for d in $devs
++ dd if=/dev/urandom of=/dev/loop2 bs=1024
dd: error writing '/dev/loop2': No space left on device
20001+0 records in
20000+0 records out
20480000 bytes (20 MB, 20 MiB) copied, 0.648983 s, 31.6 MB/s
++ true
++ case $disks in
++ chunk=1024
++ mdadm -CR /dev/md0 -amd -l5 -c 1024 -n2 --assume-clean /dev/loop1 /dev/loop2
++ rm -f /var/tmp/stderr
++ case $* in
++ case $* in
++ for args in $*
++ [[ -CR =~ /dev/ ]]
++ for args in $*
++ [[ /dev/md0 =~ /dev/ ]]
++ [[ /dev/md0 =~ md ]]
++ for args in $*
++ [[ -amd =~ /dev/ ]]
++ for args in $*
++ [[ -l5 =~ /dev/ ]]
++ for args in $*
++ [[ -c =~ /dev/ ]]
++ for args in $*
++ [[ 1024 =~ /dev/ ]]
++ for args in $*
++ [[ -n2 =~ /dev/ ]]
++ for args in $*
++ [[ --assume-clean =~ /dev/ ]]
++ for args in $*
++ [[ /dev/loop1 =~ /dev/ ]]
++ [[ /dev/loop1 =~ md ]]
++ /lkp/benchmarks/mdadm-selftests/mdadm --zero /dev/loop1
mdadm: Unrecognised md component device - /dev/loop1
++ for args in $*
++ [[ /dev/loop2 =~ /dev/ ]]
++ [[ /dev/loop2 =~ md ]]
++ /lkp/benchmarks/mdadm-selftests/mdadm --zero /dev/loop2
mdadm: Unrecognised md component device - /dev/loop2
++ /lkp/benchmarks/mdadm-selftests/mdadm --quiet -CR /dev/md0 -amd -l5 -c 1024 -n2 --assume-clean /dev/loop1 /dev/loop2 --auto=yes
++ rv=0
++ case $* in
++ cat /var/tmp/stderr
++ return 0
++ mdadm /dev/md0 --add /dev/loop6
++ rm -f /var/tmp/stderr
++ case $* in
++ case $* in
++ /lkp/benchmarks/mdadm-selftests/mdadm --quiet /dev/md0 --add /dev/loop6
++ rv=0
++ case $* in
++ cat /var/tmp/stderr
++ return 0
++ echo 20
++ echo 20
++ mdadm --grow /dev/md0 -n 3
++ rm -f /var/tmp/stderr
++ case $* in
++ case $* in
++ /lkp/benchmarks/mdadm-selftests/mdadm --quiet --grow /dev/md0 -n 3
++ rv=0
++ case $* in
++ cat /var/tmp/stderr
++ return 0
++ check reshape
++ case $1 in
++ cnt=5
++ grep -sq reshape /proc/mdstat
++ '[' 5 -gt 0 ']'
++ grep -v idle /sys/block/md0/md/sync_action
++ sleep 0.5
++ cnt=4
++ grep -sq reshape /proc/mdstat
++ check state UUU
++ case $1 in
++ grep -sq 'blocks.*\[UUU\]$' /proc/mdstat
++ sleep 0.5
++ mdadm --stop /dev/md0
++ rm -f /var/tmp/stderr
++ case $* in
++ case $* in
++ /lkp/benchmarks/mdadm-selftests/mdadm --quiet --stop /dev/md0
++ rv=0
++ case $* in
++ cat /var/tmp/stderr
++ return 0
++ mdadm --assemble /dev/md0 /dev/loop1 /dev/loop2 /dev/loop6
++ rm -f /var/tmp/stderr
++ case $* in
++ case $* in
++ /lkp/benchmarks/mdadm-selftests/mdadm --quiet --assemble /dev/md0 /dev/loop1 /dev/loop2 /dev/loop6
++ rv=0
++ case $* in
++ cat /var/tmp/stderr
++ return 0
++ check reshape
++ case $1 in
++ cnt=5
++ grep -sq reshape /proc/mdstat
++ echo 1000
++ echo 2000
++ check wait
++ case $1 in
+++ cat /proc/sys/dev/raid/speed_limit_max
++ p=2000
++ echo 2000000
++ sleep 0.1
++ grep -Eq '(resync|recovery|reshape|check|repair) *=' /proc/mdstat
++ sleep 0.5
++ grep -Eq '(resync|recovery|reshape|check|repair) *=' /proc/mdstat
++ sleep 0.5
++ grep -Eq '(resync|recovery|reshape|check|repair) *=' /proc/mdstat
++ grep -v idle /sys/block/md0/md/sync_action
++ echo 2000
++ echo check
++ check wait
++ case $1 in
+++ cat /proc/sys/dev/raid/speed_limit_max
++ p=2000
++ echo 2000000
++ sleep 0.1
++ grep -Eq '(resync|recovery|reshape|check|repair) *=' /proc/mdstat
++ sleep 0.5
++ grep -Eq '(resync|recovery|reshape|check|repair) *=' /proc/mdstat
++ sleep 0.5
++ grep -Eq '(resync|recovery|reshape|check|repair) *=' /proc/mdstat
++ grep -v idle /sys/block/md0/md/sync_action
++ echo 2000
+++ cat /sys/block/md0/md/mismatch_cnt
++ mm=0
++ '[' 0 -gt 0 ']'
++ mdadm -S /dev/md0
++ rm -f /var/tmp/stderr
++ case $* in
++ udevadm settle
+++ cat /proc/sys/dev/raid/speed_limit_max
++ p=2000
++ echo 20000
++ case $* in
++ /lkp/benchmarks/mdadm-selftests/mdadm --quiet -S /dev/md0
++ rv=0
++ case $* in
++ udevadm settle
++ echo 2000
++ cat /var/tmp/stderr
++ return 0
++ for disks in 2 3 4 5
++ eval 'devs="/dev/loop1' /dev/loop2 '$dev3"'
+++ devs='/dev/loop1 /dev/loop2 /dev/loop3'
++ st=UUUU
++ for d in $devs
++ dd if=/dev/urandom of=/dev/loop1 bs=1024
dd: error writing '/dev/loop1': No space left on device
20001+0 records in
20000+0 records out
20480000 bytes (20 MB, 20 MiB) copied, 0.430464 s, 47.6 MB/s
++ true
++ for d in $devs
++ dd if=/dev/urandom of=/dev/loop2 bs=1024
dd: error writing '/dev/loop2': No space left on device
20001+0 records in
20000+0 records out
20480000 bytes (20 MB, 20 MiB) copied, 0.44228 s, 46.3 MB/s
++ true
++ for d in $devs
++ dd if=/dev/urandom of=/dev/loop3 bs=1024
dd: error writing '/dev/loop3': No space left on device
20001+0 records in
20000+0 records out
20480000 bytes (20 MB, 20 MiB) copied, 0.442369 s, 46.3 MB/s
++ true
++ case $disks in
++ chunk=1024
++ mdadm -CR /dev/md0 -amd -l5 -c 1024 -n3 --assume-clean /dev/loop1 /dev/loop2 /dev/loop3
++ rm -f /var/tmp/stderr
++ case $* in
++ case $* in
++ for args in $*
++ [[ -CR =~ /dev/ ]]
++ for args in $*
++ [[ /dev/md0 =~ /dev/ ]]
++ [[ /dev/md0 =~ md ]]
++ for args in $*
++ [[ -amd =~ /dev/ ]]
++ for args in $*
++ [[ -l5 =~ /dev/ ]]
++ for args in $*
++ [[ -c =~ /dev/ ]]
++ for args in $*
++ [[ 1024 =~ /dev/ ]]
++ for args in $*
++ [[ -n3 =~ /dev/ ]]
++ for args in $*
++ [[ --assume-clean =~ /dev/ ]]
++ for args in $*
++ [[ /dev/loop1 =~ /dev/ ]]
++ [[ /dev/loop1 =~ md ]]
++ /lkp/benchmarks/mdadm-selftests/mdadm --zero /dev/loop1
mdadm: Unrecognised md component device - /dev/loop1
++ for args in $*
++ [[ /dev/loop2 =~ /dev/ ]]
++ [[ /dev/loop2 =~ md ]]
++ /lkp/benchmarks/mdadm-selftests/mdadm --zero /dev/loop2
mdadm: Unrecognised md component device - /dev/loop2
++ for args in $*
++ [[ /dev/loop3 =~ /dev/ ]]
++ [[ /dev/loop3 =~ md ]]
++ /lkp/benchmarks/mdadm-selftests/mdadm --zero /dev/loop3
mdadm: Unrecognised md component device - /dev/loop3
++ /lkp/benchmarks/mdadm-selftests/mdadm --quiet -CR /dev/md0 -amd -l5 -c 1024 -n3 --assume-clean /dev/loop1 /dev/loop2 /dev/loop3 --auto=yes
++ rv=0
++ case $* in
++ cat /var/tmp/stderr
++ return 0
++ mdadm /dev/md0 --add /dev/loop6
++ rm -f /var/tmp/stderr
++ case $* in
++ case $* in
++ /lkp/benchmarks/mdadm-selftests/mdadm --quiet /dev/md0 --add /dev/loop6
++ rv=0
++ case $* in
++ cat /var/tmp/stderr
++ return 0
++ echo 20
++ echo 20
++ mdadm --grow /dev/md0 -n 4
++ rm -f /var/tmp/stderr
++ case $* in
++ case $* in
++ /lkp/benchmarks/mdadm-selftests/mdadm --quiet --grow /dev/md0 -n 4
++ rv=0
++ case $* in
++ cat /var/tmp/stderr
mdadm: Need to backup 6144K of critical section..
++ return 0
++ check reshape
++ case $1 in
++ cnt=5
++ grep -sq reshape /proc/mdstat
++ '[' 5 -gt 0 ']'
++ grep -v idle /sys/block/md0/md/sync_action
++ sleep 0.5
++ cnt=4
++ grep -sq reshape /proc/mdstat
++ check state UUUU
++ case $1 in
++ grep -sq 'blocks.*\[UUUU\]$' /proc/mdstat
++ sleep 0.5
++ mdadm --stop /dev/md0
++ rm -f /var/tmp/stderr
++ case $* in
++ case $* in
++ /lkp/benchmarks/mdadm-selftests/mdadm --quiet --stop /dev/md0
++ rv=0
++ case $* in
++ cat /var/tmp/stderr
++ return 0
++ mdadm --assemble /dev/md0 /dev/loop1 /dev/loop2 /dev/loop3 /dev/loop6
++ rm -f /var/tmp/stderr
++ case $* in
++ case $* in
++ /lkp/benchmarks/mdadm-selftests/mdadm --quiet --assemble /dev/md0 /dev/loop1 /dev/loop2 /dev/loop3 /dev/loop6
++ rv=0
++ case $* in
++ cat /var/tmp/stderr
++ return 0
++ check reshape
++ case $1 in
++ cnt=5
++ grep -sq reshape /proc/mdstat
++ echo 1000
++ echo 2000
++ check wait
++ case $1 in
+++ cat /proc/sys/dev/raid/speed_limit_max
++ p=2000
++ echo 2000000
++ sleep 0.1
++ grep -Eq '(resync|recovery|reshape|check|repair) *=' /proc/mdstat
++ sleep 0.5
++ grep -Eq '(resync|recovery|reshape|check|repair) *=' /proc/mdstat
++ sleep 0.5
++ grep -Eq '(resync|recovery|reshape|check|repair) *=' /proc/mdstat
++ sleep 0.5
++ grep -Eq '(resync|recovery|reshape|check|repair) *=' /proc/mdstat
++ grep -v idle /sys/block/md0/md/sync_action
++ echo 2000
++ echo check
++ check wait
++ case $1 in
+++ cat /proc/sys/dev/raid/speed_limit_max
++ p=2000
++ echo 2000000
++ sleep 0.1
++ grep -Eq '(resync|recovery|reshape|check|repair) *=' /proc/mdstat
++ sleep 0.5
++ grep -Eq '(resync|recovery|reshape|check|repair) *=' /proc/mdstat
++ sleep 0.5
++ grep -Eq '(resync|recovery|reshape|check|repair) *=' /proc/mdstat
++ grep -v idle /sys/block/md0/md/sync_action
++ echo 2000
+++ cat /sys/block/md0/md/mismatch_cnt
++ mm=0
++ '[' 0 -gt 0 ']'
++ mdadm -S /dev/md0
++ rm -f /var/tmp/stderr
++ case $* in
++ udevadm settle
+++ cat /proc/sys/dev/raid/speed_limit_max
++ p=2000
++ echo 20000
++ case $* in
++ /lkp/benchmarks/mdadm-selftests/mdadm --quiet -S /dev/md0
++ rv=0
++ case $* in
++ udevadm settle
++ echo 2000
++ cat /var/tmp/stderr
++ return 0
++ for disks in 2 3 4 5
++ eval 'devs="/dev/loop1' /dev/loop2 /dev/loop3 '$dev4"'
+++ devs='/dev/loop1 /dev/loop2 /dev/loop3 /dev/loop4'
++ st=UUUUU
++ for d in $devs
++ dd if=/dev/urandom of=/dev/loop1 bs=1024
dd: error writing '/dev/loop1': No space left on device
20001+0 records in
20000+0 records out
20480000 bytes (20 MB, 20 MiB) copied, 0.406947 s, 50.3 MB/s
++ true
++ for d in $devs
++ dd if=/dev/urandom of=/dev/loop2 bs=1024
dd: error writing '/dev/loop2': No space left on device
20001+0 records in
20000+0 records out
20480000 bytes (20 MB, 20 MiB) copied, 0.415534 s, 49.3 MB/s
++ true
++ for d in $devs
++ dd if=/dev/urandom of=/dev/loop3 bs=1024
dd: error writing '/dev/loop3': No space left on device
20001+0 records in
20000+0 records out
20480000 bytes (20 MB, 20 MiB) copied, 0.474736 s, 43.1 MB/s
++ true
++ for d in $devs
++ dd if=/dev/urandom of=/dev/loop4 bs=1024
dd: error writing '/dev/loop4': No space left on device
20001+0 records in
20000+0 records out
20480000 bytes (20 MB, 20 MiB) copied, 0.467228 s, 43.8 MB/s
++ true
++ case $disks in
++ chunk=512
++ mdadm -CR /dev/md0 -amd -l5 -c 512 -n4 --assume-clean /dev/loop1 /dev/loop2 /dev/loop3 /dev/loop4
++ rm -f /var/tmp/stderr
++ case $* in
++ case $* in
++ for args in $*
++ [[ -CR =~ /dev/ ]]
++ for args in $*
++ [[ /dev/md0 =~ /dev/ ]]
++ [[ /dev/md0 =~ md ]]
++ for args in $*
++ [[ -amd =~ /dev/ ]]
++ for args in $*
++ [[ -l5 =~ /dev/ ]]
++ for args in $*
++ [[ -c =~ /dev/ ]]
++ for args in $*
++ [[ 512 =~ /dev/ ]]
++ for args in $*
++ [[ -n4 =~ /dev/ ]]
++ for args in $*
++ [[ --assume-clean =~ /dev/ ]]
++ for args in $*
++ [[ /dev/loop1 =~ /dev/ ]]
++ [[ /dev/loop1 =~ md ]]
++ /lkp/benchmarks/mdadm-selftests/mdadm --zero /dev/loop1
mdadm: Unrecognised md component device - /dev/loop1
++ for args in $*
++ [[ /dev/loop2 =~ /dev/ ]]
++ [[ /dev/loop2 =~ md ]]
++ /lkp/benchmarks/mdadm-selftests/mdadm --zero /dev/loop2
mdadm: Unrecognised md component device - /dev/loop2
++ for args in $*
++ [[ /dev/loop3 =~ /dev/ ]]
++ [[ /dev/loop3 =~ md ]]
++ /lkp/benchmarks/mdadm-selftests/mdadm --zero /dev/loop3
mdadm: Unrecognised md component device - /dev/loop3
++ for args in $*
++ [[ /dev/loop4 =~ /dev/ ]]
++ [[ /dev/loop4 =~ md ]]
++ /lkp/benchmarks/mdadm-selftests/mdadm --zero /dev/loop4
mdadm: Unrecognised md component device - /dev/loop4
++ /lkp/benchmarks/mdadm-selftests/mdadm --quiet -CR /dev/md0 -amd -l5 -c 512 -n4 --assume-clean /dev/loop1 /dev/loop2 /dev/loop3 /dev/loop4 --auto=yes
++ rv=0
++ case $* in
++ cat /var/tmp/stderr
++ return 0
++ mdadm /dev/md0 --add /dev/loop6
++ rm -f /var/tmp/stderr
++ case $* in
++ case $* in
++ /lkp/benchmarks/mdadm-selftests/mdadm --quiet /dev/md0 --add /dev/loop6
++ rv=0
++ case $* in
++ cat /var/tmp/stderr
++ return 0
++ echo 20
++ echo 20
++ mdadm --grow /dev/md0 -n 5
++ rm -f /var/tmp/stderr
++ case $* in
++ case $* in
++ /lkp/benchmarks/mdadm-selftests/mdadm --quiet --grow /dev/md0 -n 5
++ rv=0
++ case $* in
++ cat /var/tmp/stderr
mdadm: Need to backup 6144K of critical section..
++ return 0
++ check reshape
++ case $1 in
++ cnt=5
++ grep -sq reshape /proc/mdstat
++ '[' 5 -gt 0 ']'
++ grep -v idle /sys/block/md0/md/sync_action
++ sleep 0.5
++ cnt=4
++ grep -sq reshape /proc/mdstat
++ check state UUUUU
++ case $1 in
++ grep -sq 'blocks.*\[UUUUU\]$' /proc/mdstat
++ sleep 0.5
++ mdadm --stop /dev/md0
++ rm -f /var/tmp/stderr
++ case $* in
++ case $* in
++ /lkp/benchmarks/mdadm-selftests/mdadm --quiet --stop /dev/md0
++ rv=0
++ case $* in
++ cat /var/tmp/stderr
++ return 0
++ mdadm --assemble /dev/md0 /dev/loop1 /dev/loop2 /dev/loop3 /dev/loop4 /dev/loop6
++ rm -f /var/tmp/stderr
++ case $* in
++ case $* in
++ /lkp/benchmarks/mdadm-selftests/mdadm --quiet --assemble /dev/md0 /dev/loop1 /dev/loop2 /dev/loop3 /dev/loop4 /dev/loop6
++ rv=0
++ case $* in
++ cat /var/tmp/stderr
++ return 0
++ check reshape
++ case $1 in
++ cnt=5
++ grep -sq reshape /proc/mdstat
++ echo 1000
++ echo 2000
++ check wait
++ case $1 in
+++ cat /proc/sys/dev/raid/speed_limit_max
++ p=2000
++ echo 2000000
++ sleep 0.1
++ grep -Eq '(resync|recovery|reshape|check|repair) *=' /proc/mdstat
++ sleep 0.5
++ grep -Eq '(resync|recovery|reshape|check|repair) *=' /proc/mdstat
++ sleep 0.5
++ grep -Eq '(resync|recovery|reshape|check|repair) *=' /proc/mdstat
++ sleep 0.5
++ grep -Eq '(resync|recovery|reshape|check|repair) *=' /proc/mdstat
++ sleep 0.5
++ grep -Eq '(resync|recovery|reshape|check|repair) *=' /proc/mdstat
++ grep -v idle /sys/block/md0/md/sync_action
++ echo 2000
++ echo check
++ check wait
++ case $1 in
+++ cat /proc/sys/dev/raid/speed_limit_max
++ p=2000
++ echo 2000000
++ sleep 0.1
++ grep -Eq '(resync|recovery|reshape|check|repair) *=' /proc/mdstat
++ sleep 0.5
++ grep -Eq '(resync|recovery|reshape|check|repair) *=' /proc/mdstat
++ sleep 0.5
++ grep -Eq '(resync|recovery|reshape|check|repair) *=' /proc/mdstat
++ grep -v idle /sys/block/md0/md/sync_action
++ echo 2000
+++ cat /sys/block/md0/md/mismatch_cnt
++ mm=0
++ '[' 0 -gt 0 ']'
++ mdadm -S /dev/md0
++ rm -f /var/tmp/stderr
++ case $* in
++ udevadm settle
+++ cat /proc/sys/dev/raid/speed_limit_max
++ p=2000
++ echo 20000
++ case $* in
++ /lkp/benchmarks/mdadm-selftests/mdadm --quiet -S /dev/md0
++ rv=0
++ case $* in
++ udevadm settle
++ echo 2000
++ cat /var/tmp/stderr
++ return 0
++ for disks in 2 3 4 5
++ eval 'devs="/dev/loop1' /dev/loop2 /dev/loop3 /dev/loop4 '$dev5"'
+++ devs='/dev/loop1 /dev/loop2 /dev/loop3 /dev/loop4 /dev/loop5'
++ st=UUUUUU
++ for d in $devs
++ dd if=/dev/urandom of=/dev/loop1 bs=1024
dd: error writing '/dev/loop1': No space left on device
20001+0 records in
20000+0 records out
20480000 bytes (20 MB, 20 MiB) copied, 0.579398 s, 35.3 MB/s
++ true
++ for d in $devs
++ dd if=/dev/urandom of=/dev/loop2 bs=1024
dd: error writing '/dev/loop2': No space left on device
20001+0 records in
20000+0 records out
20480000 bytes (20 MB, 20 MiB) copied, 0.459501 s, 44.6 MB/s
++ true
++ for d in $devs
++ dd if=/dev/urandom of=/dev/loop3 bs=1024
dd: error writing '/dev/loop3': No space left on device
20001+0 records in
20000+0 records out
20480000 bytes (20 MB, 20 MiB) copied, 0.460076 s, 44.5 MB/s
++ true
++ for d in $devs
++ dd if=/dev/urandom of=/dev/loop4 bs=1024
dd: error writing '/dev/loop4': No space left on device
20001+0 records in
20000+0 records out
20480000 bytes (20 MB, 20 MiB) copied, 0.468438 s, 43.7 MB/s
++ true
++ for d in $devs
++ dd if=/dev/urandom of=/dev/loop5 bs=1024
dd: error writing '/dev/loop5': No space left on device
20001+0 records in
20000+0 records out
20480000 bytes (20 MB, 20 MiB) copied, 0.443522 s, 46.2 MB/s
++ true
++ case $disks in
++ chunk=256
++ mdadm -CR /dev/md0 -amd -l5 -c 256 -n5 --assume-clean /dev/loop1 /dev/loop2 /dev/loop3 /dev/loop4 /dev/loop5
++ rm -f /var/tmp/stderr
++ case $* in
++ case $* in
++ for args in $*
++ [[ -CR =~ /dev/ ]]
++ for args in $*
++ [[ /dev/md0 =~ /dev/ ]]
++ [[ /dev/md0 =~ md ]]
++ for args in $*
++ [[ -amd =~ /dev/ ]]
++ for args in $*
++ [[ -l5 =~ /dev/ ]]
++ for args in $*
++ [[ -c =~ /dev/ ]]
++ for args in $*
++ [[ 256 =~ /dev/ ]]
++ for args in $*
++ [[ -n5 =~ /dev/ ]]
++ for args in $*
++ [[ --assume-clean =~ /dev/ ]]
++ for args in $*
++ [[ /dev/loop1 =~ /dev/ ]]
++ [[ /dev/loop1 =~ md ]]
++ /lkp/benchmarks/mdadm-selftests/mdadm --zero /dev/loop1
mdadm: Unrecognised md component device - /dev/loop1
++ for args in $*
++ [[ /dev/loop2 =~ /dev/ ]]
++ [[ /dev/loop2 =~ md ]]
++ /lkp/benchmarks/mdadm-selftests/mdadm --zero /dev/loop2
mdadm: Unrecognised md component device - /dev/loop2
++ for args in $*
++ [[ /dev/loop3 =~ /dev/ ]]
++ [[ /dev/loop3 =~ md ]]
++ /lkp/benchmarks/mdadm-selftests/mdadm --zero /dev/loop3
mdadm: Unrecognised md component device - /dev/loop3
++ for args in $*
++ [[ /dev/loop4 =~ /dev/ ]]
++ [[ /dev/loop4 =~ md ]]
++ /lkp/benchmarks/mdadm-selftests/mdadm --zero /dev/loop4
mdadm: Unrecognised md component device - /dev/loop4
++ for args in $*
++ [[ /dev/loop5 =~ /dev/ ]]
++ [[ /dev/loop5 =~ md ]]
++ /lkp/benchmarks/mdadm-selftests/mdadm --zero /dev/loop5
mdadm: Unrecognised md component device - /dev/loop5
++ /lkp/benchmarks/mdadm-selftests/mdadm --quiet -CR /dev/md0 -amd -l5 -c 256 -n5 --assume-clean /dev/loop1 /dev/loop2 /dev/loop3 /dev/loop4 /dev/loop5 --auto=yes
++ rv=0
++ case $* in
++ cat /var/tmp/stderr
++ return 0
++ mdadm /dev/md0 --add /dev/loop6
++ rm -f /var/tmp/stderr
++ case $* in
++ case $* in
++ /lkp/benchmarks/mdadm-selftests/mdadm --quiet /dev/md0 --add /dev/loop6
++ rv=0
++ case $* in
++ cat /var/tmp/stderr
++ return 0
++ echo 20
++ echo 20
++ mdadm --grow /dev/md0 -n 6
++ rm -f /var/tmp/stderr
++ case $* in
++ case $* in
++ /lkp/benchmarks/mdadm-selftests/mdadm --quiet --grow /dev/md0 -n 6
++ rv=0
++ case $* in
++ cat /var/tmp/stderr
mdadm: Need to backup 5120K of critical section..
++ return 0
++ check reshape
++ case $1 in
++ cnt=5
++ grep -sq reshape /proc/mdstat
++ '[' 5 -gt 0 ']'
++ grep -v idle /sys/block/md0/md/sync_action
++ sleep 0.5
++ cnt=4
++ grep -sq reshape /proc/mdstat
++ check state UUUUUU
++ case $1 in
++ grep -sq 'blocks.*\[UUUUUU\]$' /proc/mdstat
++ sleep 0.5
++ mdadm --stop /dev/md0
++ rm -f /var/tmp/stderr
++ case $* in
++ case $* in
++ /lkp/benchmarks/mdadm-selftests/mdadm --quiet --stop /dev/md0
++ rv=0
++ case $* in
++ cat /var/tmp/stderr
++ return 0
++ mdadm --assemble /dev/md0 /dev/loop1 /dev/loop2 /dev/loop3 /dev/loop4 /dev/loop5 /dev/loop6
++ rm -f /var/tmp/stderr
++ case $* in
++ case $* in
++ /lkp/benchmarks/mdadm-selftests/mdadm --quiet --assemble /dev/md0 /dev/loop1 /dev/loop2 /dev/loop3 /dev/loop4 /dev/loop5 /dev/loop6
mdadm: restoring critical section
++ rv=0
++ case $* in
++ cat /var/tmp/stderr
++ return 0
++ check reshape
++ case $1 in
++ cnt=5
++ grep -sq reshape /proc/mdstat
++ echo 1000
++ echo 2000
++ check wait
++ case $1 in
+++ cat /proc/sys/dev/raid/speed_limit_max
++ p=2000
++ echo 2000000
++ sleep 0.1
++ grep -Eq '(resync|recovery|reshape|check|repair) *=' /proc/mdstat
++ sleep 0.5
++ grep -Eq '(resync|recovery|reshape|check|repair) *=' /proc/mdstat
++ sleep 0.5
++ grep -Eq '(resync|recovery|reshape|check|repair) *=' /proc/mdstat
++ sleep 0.5
++ grep -Eq '(resync|recovery|reshape|check|repair) *=' /proc/mdstat
++ sleep 0.5
++ grep -Eq '(resync|recovery|reshape|check|repair) *=' /proc/mdstat
++ sleep 0.5
++ grep -Eq '(resync|recovery|reshape|check|repair) *=' /proc/mdstat
++ grep -v idle /sys/block/md0/md/sync_action
++ echo 2000
++ echo check
++ check wait
++ case $1 in
+++ cat /proc/sys/dev/raid/speed_limit_max
++ p=2000
++ echo 2000000
++ sleep 0.1
++ grep -Eq '(resync|recovery|reshape|check|repair) *=' /proc/mdstat
++ sleep 0.5
++ grep -Eq '(resync|recovery|reshape|check|repair) *=' /proc/mdstat
++ sleep 0.5
++ grep -Eq '(resync|recovery|reshape|check|repair) *=' /proc/mdstat
++ grep -v idle /sys/block/md0/md/sync_action
++ echo 2000
+++ cat /sys/block/md0/md/mismatch_cnt
++ mm=0
++ '[' 0 -gt 0 ']'
++ mdadm -S /dev/md0
++ rm -f /var/tmp/stderr
++ case $* in
++ udevadm settle
+++ cat /proc/sys/dev/raid/speed_limit_max
++ p=2000
++ echo 20000
++ case $* in
++ /lkp/benchmarks/mdadm-selftests/mdadm --quiet -S /dev/md0
++ rv=0
++ case $* in
++ udevadm settle
++ echo 2000
++ cat /var/tmp/stderr
++ return 0
Yu Kuai May 21, 2024, 3:11 a.m. UTC | #4
Hi,

On 2024/05/21 11:01, Oliver Sang wrote:
> hi, Yu Kuai,
> 
> On Tue, May 21, 2024 at 10:20:54AM +0800, Yu Kuai wrote:
>> Hi,
>>
>> On 2024/05/20 23:01, kernel test robot wrote:
>>>
>>>
>>> Hello,
>>>
>>> kernel test robot noticed "mdadm-selftests.07reshape5intr.fail" on:
>>>
>>> commit: 18effaab5f57ef44763e537c782f905e06f6c4f5 ("[PATCH md-6.10 5/9] md: replace sysfs api sync_action with new helpers")
>>> url: https://github.com/intel-lab-lkp/linux/commits/Yu-Kuai/md-rearrange-recovery_flage/20240509-093248
>>> base: https://git.kernel.org/cgit/linux/kernel/git/device-mapper/linux-dm.git for-next
>>> patch link: https://lore.kernel.org/all/20240509011900.2694291-6-yukuai1@huaweicloud.com/
>>> patch subject: [PATCH md-6.10 5/9] md: replace sysfs api sync_action with new helpers
>>>
>>> in testcase: mdadm-selftests
>>> version: mdadm-selftests-x86_64-5f41845-1_20240412
>>> with following parameters:
>>>
>>> 	disk: 1HDD
>>> 	test_prefix: 07reshape5intr
>>>
>>>
>>>
>>> compiler: gcc-13
>>> test machine: 8 threads 1 sockets Intel(R) Core(TM) i7-4790T CPU @ 2.70GHz (Haswell) with 16G memory
>>>
>>> (please refer to attached dmesg/kmsg for entire log/backtrace)
>>>
>>>
>>>
>>>
>>> If you fix the issue in a separate patch/commit (i.e. not just a new version of
>>> the same patch/commit), kindly add following tags
>>> | Reported-by: kernel test robot <oliver.sang@intel.com>
>>> | Closes: https://lore.kernel.org/oe-lkp/202405202204.4e3dc662-oliver.sang@intel.com
>>>
>>> 2024-05-14 21:36:26 mkdir -p /var/tmp
>>> 2024-05-14 21:36:26 mke2fs -t ext3 -b 4096 -J size=4 -q /dev/sda1
>>> 2024-05-14 21:36:57 mount -t ext3 /dev/sda1 /var/tmp
>>> sed -e 's/{DEFAULT_METADATA}/1.2/g' \
>>> -e 's,{MAP_PATH},/run/mdadm/map,g'  mdadm.8.in > mdadm.8
>>> /usr/bin/install -D -m 644 mdadm.8 /usr/share/man/man8/mdadm.8
>>> /usr/bin/install -D -m 644 mdmon.8 /usr/share/man/man8/mdmon.8
>>> /usr/bin/install -D -m 644 md.4 /usr/share/man/man4/md.4
>>> /usr/bin/install -D -m 644 mdadm.conf.5 /usr/share/man/man5/mdadm.conf.5
>>> /usr/bin/install -D -m 644 udev-md-raid-creating.rules /lib/udev/rules.d/01-md-raid-creating.rules
>>> /usr/bin/install -D -m 644 udev-md-raid-arrays.rules /lib/udev/rules.d/63-md-raid-arrays.rules
>>> /usr/bin/install -D -m 644 udev-md-raid-assembly.rules /lib/udev/rules.d/64-md-raid-assembly.rules
>>> /usr/bin/install -D -m 644 udev-md-clustered-confirm-device.rules /lib/udev/rules.d/69-md-clustered-confirm-device.rules
>>> /usr/bin/install -D  -m 755 mdadm /sbin/mdadm
>>> /usr/bin/install -D  -m 755 mdmon /sbin/mdmon
>>> Testing on linux-6.9.0-rc2-00012-g18effaab5f57 kernel
>>> /lkp/benchmarks/mdadm-selftests/tests/07reshape5intr... FAILED - see /var/tmp/07reshape5intr.log and /var/tmp/fail07reshape5intr.log for detail
>> [root@fedora mdadm]# ./test --dev=loop --tests=07reshape5intr
>> test: skipping tests for multipath, which is removed in upstream 6.8+
>> kernels
>> test: skipping tests for linear, which is removed in upstream 6.8+ kernels
>> Testing on linux-6.9.0-rc2-00023-gf092583596a2 kernel
>> /root/mdadm/tests/07reshape5intr... FAILED - see /var/tmp/07reshape5intr.log
>> and /var/tmp/fail07reshape5intr.log for details
>>    (KNOWN BROKEN TEST: always fails)
>>
>> So, since this test is marked BROKEN.
>>
>> Please share the whole log, and is it possible to share the two logs?
> 
> 
> we only captured one log as attached log-18effaab5f.
> also attached parent log FYI.
I mean: please ignore the BROKEN test, and next time please attach the two
logs if possible:

/var/tmp/07reshape5intr.log and /var/tmp/fail07reshape5intr.log

Thanks for the test; we really need a per-patch CI.
Kuai

> 
> 
>>
>> Thanks,
>> Kuai
>>
>>>
>>>
>>>
>>> The kernel config and materials to reproduce are available at:
>>> https://download.01.org/0day-ci/archive/20240520/202405202204.4e3dc662-oliver.sang@intel.com
>>>
>>>
>>>
>>
Xiao Ni May 21, 2024, 3:21 a.m. UTC | #5
Hi Kuai

I've tested 07reshape5intr with the latest upstream kernel 15 times
without failure. So it would be better to try running 07reshape5intr with
your patch set applied.
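
For reference, a repeated run with the bundled test harness looks roughly
like this (just a sketch; it assumes an mdadm source tree at the path used
earlier in this thread and must be run as root):

  cd /root/mdadm
  for i in $(seq 1 15); do
          # stop at the first failure so the logs under /var/tmp are kept
          ./test --dev=loop --tests=07reshape5intr || break
  done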

Regards
Xiao




On Tue, May 21, 2024 at 11:02 AM Oliver Sang <oliver.sang@intel.com> wrote:
>
> hi, Yu Kuai,
>
> On Tue, May 21, 2024 at 10:20:54AM +0800, Yu Kuai wrote:
> > Hi,
> >
> > On 2024/05/20 23:01, kernel test robot wrote:
> > >
> > >
> > > Hello,
> > >
> > > kernel test robot noticed "mdadm-selftests.07reshape5intr.fail" on:
> > >
> > > commit: 18effaab5f57ef44763e537c782f905e06f6c4f5 ("[PATCH md-6.10 5/9] md: replace sysfs api sync_action with new helpers")
> > > url: https://github.com/intel-lab-lkp/linux/commits/Yu-Kuai/md-rearrange-recovery_flage/20240509-093248
> > > base: https://git.kernel.org/cgit/linux/kernel/git/device-mapper/linux-dm.git for-next
> > > patch link: https://lore.kernel.org/all/20240509011900.2694291-6-yukuai1@huaweicloud.com/
> > > patch subject: [PATCH md-6.10 5/9] md: replace sysfs api sync_action with new helpers
> > >
> > > in testcase: mdadm-selftests
> > > version: mdadm-selftests-x86_64-5f41845-1_20240412
> > > with following parameters:
> > >
> > >     disk: 1HDD
> > >     test_prefix: 07reshape5intr
> > >
> > >
> > >
> > > compiler: gcc-13
> > > test machine: 8 threads 1 sockets Intel(R) Core(TM) i7-4790T CPU @ 2.70GHz (Haswell) with 16G memory
> > >
> > > (please refer to attached dmesg/kmsg for entire log/backtrace)
> > >
> > >
> > >
> > >
> > > If you fix the issue in a separate patch/commit (i.e. not just a new version of
> > > the same patch/commit), kindly add following tags
> > > | Reported-by: kernel test robot <oliver.sang@intel.com>
> > > | Closes: https://lore.kernel.org/oe-lkp/202405202204.4e3dc662-oliver.sang@intel.com
> > >
> > > 2024-05-14 21:36:26 mkdir -p /var/tmp
> > > 2024-05-14 21:36:26 mke2fs -t ext3 -b 4096 -J size=4 -q /dev/sda1
> > > 2024-05-14 21:36:57 mount -t ext3 /dev/sda1 /var/tmp
> > > sed -e 's/{DEFAULT_METADATA}/1.2/g' \
> > > -e 's,{MAP_PATH},/run/mdadm/map,g'  mdadm.8.in > mdadm.8
> > > /usr/bin/install -D -m 644 mdadm.8 /usr/share/man/man8/mdadm.8
> > > /usr/bin/install -D -m 644 mdmon.8 /usr/share/man/man8/mdmon.8
> > > /usr/bin/install -D -m 644 md.4 /usr/share/man/man4/md.4
> > > /usr/bin/install -D -m 644 mdadm.conf.5 /usr/share/man/man5/mdadm.conf.5
> > > /usr/bin/install -D -m 644 udev-md-raid-creating.rules /lib/udev/rules.d/01-md-raid-creating.rules
> > > /usr/bin/install -D -m 644 udev-md-raid-arrays.rules /lib/udev/rules.d/63-md-raid-arrays.rules
> > > /usr/bin/install -D -m 644 udev-md-raid-assembly.rules /lib/udev/rules.d/64-md-raid-assembly.rules
> > > /usr/bin/install -D -m 644 udev-md-clustered-confirm-device.rules /lib/udev/rules.d/69-md-clustered-confirm-device.rules
> > > /usr/bin/install -D  -m 755 mdadm /sbin/mdadm
> > > /usr/bin/install -D  -m 755 mdmon /sbin/mdmon
> > > Testing on linux-6.9.0-rc2-00012-g18effaab5f57 kernel
> > > /lkp/benchmarks/mdadm-selftests/tests/07reshape5intr... FAILED - see /var/tmp/07reshape5intr.log and /var/tmp/fail07reshape5intr.log for detail
> > [root@fedora mdadm]# ./test --dev=loop --tests=07reshape5intr
> > test: skipping tests for multipath, which is removed in upstream 6.8+
> > kernels
> > test: skipping tests for linear, which is removed in upstream 6.8+ kernels
> > Testing on linux-6.9.0-rc2-00023-gf092583596a2 kernel
> > /root/mdadm/tests/07reshape5intr... FAILED - see /var/tmp/07reshape5intr.log
> > and /var/tmp/fail07reshape5intr.log for details
> >   (KNOWN BROKEN TEST: always fails)
> >
> > So, since this test is marked BROKEN.
> >
> > Please share the whole log, and is it possible to share the two logs?
>
>
> we only captured one log as attached log-18effaab5f.
> also attached parent log FYI.
>
>
> >
> > Thanks,
> > Kuai
> >
> > >
> > >
> > >
> > > The kernel config and materials to reproduce are available at:
> > > https://download.01.org/0day-ci/archive/20240520/202405202204.4e3dc662-oliver.sang@intel.com
> > >
> > >
> > >
> >
Yu Kuai May 22, 2024, 2:46 a.m. UTC | #6
Hi,

On 2024/05/21 11:21, Xiao Ni wrote:
> Hi Kuai
> 
> I've tested 07reshape5intr with the latest upstream kernel 15 times
> without failure. So it's better to have a try with 07reshape5intr with
> your patch set.

I just discussed this with Xiao on Slack; here is the conclusion:

The test 07reshape5intr adds a new disk to the array, then starts a
reshape:

mdadm /dev/md0 --add /dev/xxx
mdadm --grow /dev/md0 -n 3

However, the grow will fail:
mdadm: Failed to initiate reshape!

The root cause is that, in the kernel, action_store() returns -EBUSY
if MD_RECOVERY_RUNNING is set:

// mdadm add
add_bound_rdev
  set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);

// daemon thread
md_check_recovery
  set_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
  // do nothing
		// mdadm grow
		action_store
		 if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery))
		  return -EBUSY
  clear_bit(MD_RECOVERY_RUNNING, &mddev->recovery)

This is a long-standing problem, and we need new synchronization in the
kernel to make sure the grow won't fail.
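
For reference, the failing sequence boils down to roughly the following
(a rough sketch distilled from the 07reshape5intr trace above; the loop
device names are illustrative, and the failure is timing-dependent, so it
may take several attempts to hit the window):

  mdadm -CR /dev/md0 -l5 -n2 --assume-clean /dev/loop1 /dev/loop2
  mdadm /dev/md0 --add /dev/loop6
  # If the daemon thread still holds MD_RECOVERY_RUNNING from handling
  # the --add, the "reshape" write to sync_action returns -EBUSY:
  mdadm --grow /dev/md0 -n 3
  cat /sys/block/md0/md/sync_action    # stays "idle" when the grow fails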

Thanks,
Kuai

> 
> Regards
> Xiao
> 
> 
> 
> 
> On Tue, May 21, 2024 at 11:02 AM Oliver Sang <oliver.sang@intel.com> wrote:
>>
>> hi, Yu Kuai,
>>
>> On Tue, May 21, 2024 at 10:20:54AM +0800, Yu Kuai wrote:
>>> Hi,
>>>
>>> On 2024/05/20 23:01, kernel test robot wrote:
>>>>
>>>>
>>>> Hello,
>>>>
>>>> kernel test robot noticed "mdadm-selftests.07reshape5intr.fail" on:
>>>>
>>>> commit: 18effaab5f57ef44763e537c782f905e06f6c4f5 ("[PATCH md-6.10 5/9] md: replace sysfs api sync_action with new helpers")
>>>> url: https://github.com/intel-lab-lkp/linux/commits/Yu-Kuai/md-rearrange-recovery_flage/20240509-093248
>>>> base: https://git.kernel.org/cgit/linux/kernel/git/device-mapper/linux-dm.git for-next
>>>> patch link: https://lore.kernel.org/all/20240509011900.2694291-6-yukuai1@huaweicloud.com/
>>>> patch subject: [PATCH md-6.10 5/9] md: replace sysfs api sync_action with new helpers
>>>>
>>>> in testcase: mdadm-selftests
>>>> version: mdadm-selftests-x86_64-5f41845-1_20240412
>>>> with following parameters:
>>>>
>>>>      disk: 1HDD
>>>>      test_prefix: 07reshape5intr
>>>>
>>>>
>>>>
>>>> compiler: gcc-13
>>>> test machine: 8 threads 1 sockets Intel(R) Core(TM) i7-4790T CPU @ 2.70GHz (Haswell) with 16G memory
>>>>
>>>> (please refer to attached dmesg/kmsg for entire log/backtrace)
>>>>
>>>>
>>>>
>>>>
>>>> If you fix the issue in a separate patch/commit (i.e. not just a new version of
>>>> the same patch/commit), kindly add following tags
>>>> | Reported-by: kernel test robot <oliver.sang@intel.com>
>>>> | Closes: https://lore.kernel.org/oe-lkp/202405202204.4e3dc662-oliver.sang@intel.com
>>>>
>>>> 2024-05-14 21:36:26 mkdir -p /var/tmp
>>>> 2024-05-14 21:36:26 mke2fs -t ext3 -b 4096 -J size=4 -q /dev/sda1
>>>> 2024-05-14 21:36:57 mount -t ext3 /dev/sda1 /var/tmp
>>>> sed -e 's/{DEFAULT_METADATA}/1.2/g' \
>>>> -e 's,{MAP_PATH},/run/mdadm/map,g'  mdadm.8.in > mdadm.8
>>>> /usr/bin/install -D -m 644 mdadm.8 /usr/share/man/man8/mdadm.8
>>>> /usr/bin/install -D -m 644 mdmon.8 /usr/share/man/man8/mdmon.8
>>>> /usr/bin/install -D -m 644 md.4 /usr/share/man/man4/md.4
>>>> /usr/bin/install -D -m 644 mdadm.conf.5 /usr/share/man/man5/mdadm.conf.5
>>>> /usr/bin/install -D -m 644 udev-md-raid-creating.rules /lib/udev/rules.d/01-md-raid-creating.rules
>>>> /usr/bin/install -D -m 644 udev-md-raid-arrays.rules /lib/udev/rules.d/63-md-raid-arrays.rules
>>>> /usr/bin/install -D -m 644 udev-md-raid-assembly.rules /lib/udev/rules.d/64-md-raid-assembly.rules
>>>> /usr/bin/install -D -m 644 udev-md-clustered-confirm-device.rules /lib/udev/rules.d/69-md-clustered-confirm-device.rules
>>>> /usr/bin/install -D  -m 755 mdadm /sbin/mdadm
>>>> /usr/bin/install -D  -m 755 mdmon /sbin/mdmon
>>>> Testing on linux-6.9.0-rc2-00012-g18effaab5f57 kernel
>>>> /lkp/benchmarks/mdadm-selftests/tests/07reshape5intr... FAILED - see /var/tmp/07reshape5intr.log and /var/tmp/fail07reshape5intr.log for detail
>>> [root@fedora mdadm]# ./test --dev=loop --tests=07reshape5intr
>>> test: skipping tests for multipath, which is removed in upstream 6.8+
>>> kernels
>>> test: skipping tests for linear, which is removed in upstream 6.8+ kernels
>>> Testing on linux-6.9.0-rc2-00023-gf092583596a2 kernel
>>> /root/mdadm/tests/07reshape5intr... FAILED - see /var/tmp/07reshape5intr.log
>>> and /var/tmp/fail07reshape5intr.log for details
>>>    (KNOWN BROKEN TEST: always fails)
>>>
>>> So, since this test is marked BROKEN.
>>>
>>> Please share the whole log, and is it possible to share the two logs?
>>
>>
>> we only captured one log as attached log-18effaab5f.
>> also attached parent log FYI.
>>
>>
>>>
>>> Thanks,
>>> Kuai
>>>
>>>>
>>>>
>>>>
>>>> The kernel config and materials to reproduce are available at:
>>>> https://download.01.org/0day-ci/archive/20240520/202405202204.4e3dc662-oliver.sang@intel.com
>>>>
>>>>
>>>>
>>>
> 
> 
> .
>

Patch

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 7600da89d909..da6c94f03efb 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -4934,27 +4934,9 @@  char *md_sync_action_name(enum sync_action action)
 static ssize_t
 action_show(struct mddev *mddev, char *page)
 {
-	char *type = "idle";
-	unsigned long recovery = mddev->recovery;
-	if (test_bit(MD_RECOVERY_FROZEN, &recovery))
-		type = "frozen";
-	else if (test_bit(MD_RECOVERY_RUNNING, &recovery) ||
-	    (md_is_rdwr(mddev) && test_bit(MD_RECOVERY_NEEDED, &recovery))) {
-		if (test_bit(MD_RECOVERY_RESHAPE, &recovery))
-			type = "reshape";
-		else if (test_bit(MD_RECOVERY_SYNC, &recovery)) {
-			if (!test_bit(MD_RECOVERY_REQUESTED, &recovery))
-				type = "resync";
-			else if (test_bit(MD_RECOVERY_CHECK, &recovery))
-				type = "check";
-			else
-				type = "repair";
-		} else if (test_bit(MD_RECOVERY_RECOVER, &recovery))
-			type = "recover";
-		else if (mddev->reshape_position != MaxSector)
-			type = "reshape";
-	}
-	return sprintf(page, "%s\n", type);
+	enum sync_action action = md_sync_action(mddev);
+
+	return sprintf(page, "%s\n", md_sync_action_name(action));
 }
 
 /**
@@ -5097,35 +5079,63 @@  static int mddev_start_reshape(struct mddev *mddev)
 static ssize_t
 action_store(struct mddev *mddev, const char *page, size_t len)
 {
+	int ret;
+	enum sync_action action;
+
 	if (!mddev->pers || !mddev->pers->sync_request)
 		return -EINVAL;
 
+	action = md_sync_action_by_name(page);
 
-	if (cmd_match(page, "idle"))
-		idle_sync_thread(mddev);
-	else if (cmd_match(page, "frozen"))
-		frozen_sync_thread(mddev);
-	else if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery))
-		return -EBUSY;
-	else if (cmd_match(page, "resync"))
-		clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
-	else if (cmd_match(page, "recover")) {
-		clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
-		set_bit(MD_RECOVERY_RECOVER, &mddev->recovery);
-	} else if (cmd_match(page, "reshape")) {
-		int err = mddev_start_reshape(mddev);
-
-		if (err)
-			return err;
+	/* TODO: mdadm rely on "idle" to start sync_thread. */
+	if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery)) {
+		switch (action) {
+		case ACTION_FROZEN:
+			frozen_sync_thread(mddev);
+			return len;
+		case ACTION_IDLE:
+			idle_sync_thread(mddev);
+			break;
+		case ACTION_RESHAPE:
+		case ACTION_RECOVER:
+		case ACTION_CHECK:
+		case ACTION_REPAIR:
+		case ACTION_RESYNC:
+			return -EBUSY;
+		default:
+			return -EINVAL;
+		}
 	} else {
-		if (cmd_match(page, "check"))
+		switch (action) {
+		case ACTION_FROZEN:
+			set_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+			return len;
+		case ACTION_RESHAPE:
+			clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+			ret = mddev_start_reshape(mddev);
+			if (ret)
+				return ret;
+			break;
+		case ACTION_RECOVER:
+			clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+			set_bit(MD_RECOVERY_RECOVER, &mddev->recovery);
+			break;
+		case ACTION_CHECK:
 			set_bit(MD_RECOVERY_CHECK, &mddev->recovery);
-		else if (!cmd_match(page, "repair"))
+			fallthrough;
+		case ACTION_REPAIR:
+			set_bit(MD_RECOVERY_REQUESTED, &mddev->recovery);
+			set_bit(MD_RECOVERY_SYNC, &mddev->recovery);
+			fallthrough;
+		case ACTION_RESYNC:
+		case ACTION_IDLE:
+			clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+			break;
+		default:
 			return -EINVAL;
-		clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
-		set_bit(MD_RECOVERY_REQUESTED, &mddev->recovery);
-		set_bit(MD_RECOVERY_SYNC, &mddev->recovery);
+		}
 	}
+
 	if (mddev->ro == MD_AUTO_READ) {
 		/* A write to sync_action is enough to justify
 		 * canceling read-auto mode