Message ID: 20240130021843.3608859-1-yukuai1@huaweicloud.com
Series:     dm-raid: fix v6.7 regressions
On Mon, Jan 29, 2024 at 6:23 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
> [...]
>
> Test result:
>
> I applied this patchset on top of v6.8-rc1 and ran the lvm2 test suite
> with the following cmd for 24 rounds (for about 2 days):
>
> for t in `ls test/shell`; do
>         if cat test/shell/$t | grep raid &> /dev/null; then
>                 make check T=shell/$t
>         fi
> done
>
> failed count  failed test
>  1    ### failed: [ndev-vanilla] shell/dmsecuretest.sh
>  1    ### failed: [ndev-vanilla] shell/dmsetup-integrity-keys.sh
>  1    ### failed: [ndev-vanilla] shell/dmsetup-keyring.sh
>  5    ### failed: [ndev-vanilla] shell/duplicate-pvs-md0.sh
>  1    ### failed: [ndev-vanilla] shell/duplicate-vgid.sh
>  2    ### failed: [ndev-vanilla] shell/duplicate-vgnames.sh
>  1    ### failed: [ndev-vanilla] shell/fsadm-crypt.sh
>  1    ### failed: [ndev-vanilla] shell/integrity.sh
>  6    ### failed: [ndev-vanilla] shell/lvchange-raid1-writemostly.sh
>  2    ### failed: [ndev-vanilla] shell/lvchange-rebuild-raid.sh
>  5    ### failed: [ndev-vanilla] shell/lvconvert-raid-reshape-stripes-load-reload.sh
>  4    ### failed: [ndev-vanilla] shell/lvconvert-raid-restripe-linear.sh
>  1    ### failed: [ndev-vanilla] shell/lvconvert-raid1-split-trackchanges.sh
> 20    ### failed: [ndev-vanilla] shell/lvconvert-repair-raid.sh
> 20    ### failed: [ndev-vanilla] shell/lvcreate-large-raid.sh
> 24    ### failed: [ndev-vanilla] shell/lvextend-raid.sh
>
> And I randomly picked some tests and verified by hand that they fail in
> v6.6 as well (not all tests; I haven't had time to do this for all of
> them yet):
>
> shell/lvextend-raid.sh
> shell/lvcreate-large-raid.sh
> shell/lvconvert-repair-raid.sh
> shell/lvchange-rebuild-raid.sh
> shell/lvchange-raid1-writemostly.sh

Hi Mikulas,

Could you please advise the proper way to run these tests and to
interpret the results? Are these failures on 6.6 expected?

We hope to run the lvm2 tests regularly for all md patches. However, Yu
Kuai has spent days on this, and it seems really hard to run them
properly even once.

Thanks,
Song
On Tue, Jan 30, 2024 at 10:23 AM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>
> From: Yu Kuai <yukuai3@huawei.com>
>
> [...]
>
> Test result:
>
> I applied this patchset on top of v6.8-rc1 and ran the lvm2 test suite
> with the following cmd for 24 rounds (for about 2 days):
>
> [...]

Hi all,

In my environment, the lvm2 regression test has passed. There are only
three failed cases, which also fail on kernel 6.6:

### failed: [ndev-vanilla] shell/lvresize-fs-crypt.sh
### failed: [ndev-vanilla] shell/pvck-dump.sh
### failed: [ndev-vanilla] shell/select-report.sh
### 426 tests: 346 passed, 70 skipped, 0 timed out, 7 warned, 3 failed in 89:26.073

Best Regards,
Xiao
Hi, Xiao Ni!

On 2024/01/31 8:29, Xiao Ni wrote:
> In my environment, the lvm2 regression test has passed. There are only
> three failed cases, which also fail on kernel 6.6:
>
> ### failed: [ndev-vanilla] shell/lvresize-fs-crypt.sh
> ### failed: [ndev-vanilla] shell/pvck-dump.sh
> ### failed: [ndev-vanilla] shell/select-report.sh
> ### 426 tests: 346 passed, 70 skipped, 0 timed out, 7 warned, 3 failed
> in 89:26.073

Thanks for the test, this is great news.

Kuai
On Wed, Jan 31, 2024 at 9:25 AM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>
> Hi, Xiao Ni!
>
> On 2024/01/31 8:29, Xiao Ni wrote:
> > In my environment, the lvm2 regression test has passed. There are only
> > three failed cases, which also fail on kernel 6.6:
> >
> > ### failed: [ndev-vanilla] shell/lvresize-fs-crypt.sh
> > ### failed: [ndev-vanilla] shell/pvck-dump.sh
> > ### failed: [ndev-vanilla] shell/select-report.sh
> > ### 426 tests: 346 passed, 70 skipped, 0 timed out, 7 warned, 3 failed
> > in 89:26.073
>
> Thanks for the test, this is great news.
>
> Kuai

Hi Kuai,

Have you run the mdadm regression tests based on this patch set?

Regards,
Xiao
Hi,

On 2024/01/31 9:28, Xiao Ni wrote:
> On Wed, Jan 31, 2024 at 9:25 AM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>
>> Hi, Xiao Ni!
>>
>> [...]
>>
>> Thanks for the test, this is great news.
>>
>> Kuai
>
> Hi Kuai,
>
> Have you run the mdadm regression tests based on this patch set?

Of course, I'm running them in my VM with loop devices.

Thanks,
Kuai
Hi,

On 2024/01/31 10:52, Yu Kuai wrote:
> Hi,
>
> On 2024/01/31 9:28, Xiao Ni wrote:
>> [...]
>>
>> Have you run the mdadm regression tests based on this patch set?

BTW, I just made sure that there are no new failed tests; however, it
looks like some tests were already broken. For example:

04update-metadata:

++ /root/mdadm/mdadm --quiet -CR --assume-clean -e 0.90 /dev/md0 --level linear -n 4 -c 64 /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3 --auto=yes
++ rv=1
++ case $* in
++ cat /var/tmp/stderr
mdadm: RUN_ARRAY failed: Invalid argument

04r1update:

++ /root/mdadm/mdadm --quiet -A /dev/md0 -U resync /dev/loop0 /dev/loop1
++ rv=1
++ case $* in
++ cat /var/tmp/stderr
mdadm: --update=resync not understood for 1.x metadata

Thanks,
Kuai

> Of course, I'm running them in my VM with loop devices.
>
> Thanks,
> Kuai
From: Yu Kuai <yukuai3@huawei.com>

Changes in v4:
 - add patch 10 to fix a raid456 deadlock (for both md/raid and dm-raid);
 - add patch 13 to wait for inflight IO completion while removing a dm
   device;

Changes in v3:
 - fix a problem in patch 5;
 - add patch 12;

Changes in v2:
 - replace the revert changes for dm-raid with real fixes;
 - fix a dm-raid5 deadlock that has existed for a long time; this
   deadlock is triggered because another problem was fixed in raid5, and
   instead of a deadlock, users would read wrong data before v6.7;
   patches 9-11;

First regression, related to stopping the sync thread:

The lifetime of sync_thread is designed as follows:

1) Decide to start the sync_thread, set MD_RECOVERY_NEEDED, and wake up
   the daemon thread;
2) The daemon thread detects that MD_RECOVERY_NEEDED is set, then sets
   MD_RECOVERY_RUNNING and registers the sync_thread;
3) md_do_sync() executes the actual work; when it is done or
   interrupted, it sets MD_RECOVERY_DONE and wakes up the daemon thread;
4) The daemon thread detects that MD_RECOVERY_DONE is set, then clears
   MD_RECOVERY_RUNNING and unregisters the sync_thread;

In v6.7, we fixed md/raid to follow this design by commit f52f5c71f3d4
("md: fix stopping sync thread"); however, dm-raid was not considered
at that time, and the following tests hang:

shell/integrity-caching.sh
shell/lvconvert-raid-reshape.sh

This patch set fixes the broken tests with patches 1-4:
 - patch 1 fixes that step 4) is broken by a suspended array;
 - patch 2 fixes that step 4) is broken by a read-only array;
 - patch 3 fixes that step 3) is broken because md_do_sync() doesn't set
   MD_RECOVERY_DONE. Note that this patch introduces a new problem, that
   data will be corrupted, which is fixed in later patches;
 - patch 4 fixes that step 1) is broken because the sync_thread is
   registered and MD_RECOVERY_RUNNING is set directly; this is md/raid
   behaviour, not related to dm-raid;

With patches 1-4, the above tests no longer hang; however, they still
fail and complain that ext4 is corrupted.

Second regression, related to the frozen sync thread:

Note that for raid456, if reshape is interrupted, then calling
"pers->start_reshape" will corrupt data. dm-raid relies on md_do_sync()
not setting MD_RECOVERY_DONE so that a new sync_thread won't be
registered, and patch 3 breaks exactly this.

 - Patches 5-6 fix this problem by interrupting reshape and freezing the
   sync_thread in dm_suspend(), then unfreezing and continuing reshape
   in dm_resume(). It is verified that the dm-raid tests no longer
   complain that ext4 is corrupted.
 - Patch 7 fixes the problem that raid_message() calls
   md_reap_sync_thread() directly, without holding 'reconfig_mutex'.

Last regression, related to dm-raid456 IO concurrent with reshape:

For raid456, if reshape is still in progress, IO across the reshape
position waits for reshape to make progress. However, for dm-raid, in
the following cases reshape will never make progress, hence IO will
hang:

1) the array is read-only;
2) MD_RECOVERY_WAIT is set;
3) MD_RECOVERY_FROZEN is set;

After commit c467e97f079f ("md/raid6: use valid sector values to
determine if an I/O should wait on the reshape") fixed the problem that
IO across the reshape position doesn't wait for reshape, the dm-raid
test shell/lvconvert-raid-reshape.sh started to hang at
raid5_make_request().

For md/raid, the problem doesn't exist because:

1) If the array is read-only, it can be switched to read-write via
   ioctl/sysfs;
2) md/raid never sets MD_RECOVERY_WAIT;
3) If MD_RECOVERY_FROZEN is set, mddev_suspend() doesn't hold
   'reconfig_mutex' anymore; it can be cleared and reshape can continue
   via the sysfs api 'sync_action'.
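To illustrate, here is a minimal model of the hang (illustrative only,
not the actual raid5.c code; the two helpers are hypothetical names for
the real checks):

	static void raid456_submit_io(struct mddev *mddev, struct bio *bio)
	{
		/*
		 * IO that crosses mddev->reshape_position must wait for
		 * reshape to move past it before it can be mapped.
		 */
		while (io_crosses_reshape_position(mddev, bio)) {
			/*
			 * If the array is read-only, or MD_RECOVERY_WAIT or
			 * MD_RECOVERY_FROZEN is set in mddev->recovery,
			 * reshape never advances, so this wait never
			 * finishes and the IO hangs.
			 */
			wait_for_reshape_progress(mddev);
		}
		/* ... map and submit the bio ... */
	}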
However, I'm not sure yet how to avoid the problem in dm-raid.

 - patches 9-11 fix this problem by detecting the above 3 cases in
   dm_suspend() and failing those IOs directly.

If users really do hit this IO error, it means they were reading wrong
data before c467e97f079f anyway. And it's safe to read/write the array
after reshape makes progress successfully.

There are also some other minor changes: patch 8 and patch 12.

Test result:

I applied this patchset on top of v6.8-rc1 and ran the lvm2 test suite
with the following cmd for 24 rounds (for about 2 days):

for t in `ls test/shell`; do
        if cat test/shell/$t | grep raid &> /dev/null; then
                make check T=shell/$t
        fi
done

failed count  failed test
 1    ### failed: [ndev-vanilla] shell/dmsecuretest.sh
 1    ### failed: [ndev-vanilla] shell/dmsetup-integrity-keys.sh
 1    ### failed: [ndev-vanilla] shell/dmsetup-keyring.sh
 5    ### failed: [ndev-vanilla] shell/duplicate-pvs-md0.sh
 1    ### failed: [ndev-vanilla] shell/duplicate-vgid.sh
 2    ### failed: [ndev-vanilla] shell/duplicate-vgnames.sh
 1    ### failed: [ndev-vanilla] shell/fsadm-crypt.sh
 1    ### failed: [ndev-vanilla] shell/integrity.sh
 6    ### failed: [ndev-vanilla] shell/lvchange-raid1-writemostly.sh
 2    ### failed: [ndev-vanilla] shell/lvchange-rebuild-raid.sh
 5    ### failed: [ndev-vanilla] shell/lvconvert-raid-reshape-stripes-load-reload.sh
 4    ### failed: [ndev-vanilla] shell/lvconvert-raid-restripe-linear.sh
 1    ### failed: [ndev-vanilla] shell/lvconvert-raid1-split-trackchanges.sh
20    ### failed: [ndev-vanilla] shell/lvconvert-repair-raid.sh
20    ### failed: [ndev-vanilla] shell/lvcreate-large-raid.sh
24    ### failed: [ndev-vanilla] shell/lvextend-raid.sh

And I randomly picked some tests and verified by hand that they fail in
v6.6 as well (not all tests; I haven't had time to do this for all of
them yet):

shell/lvextend-raid.sh
shell/lvcreate-large-raid.sh
shell/lvconvert-repair-raid.sh
shell/lvchange-rebuild-raid.sh
shell/lvchange-raid1-writemostly.sh

Yu Kuai (14):
  md: don't ignore suspended array in md_check_recovery()
  md: don't ignore read-only array in md_check_recovery()
  md: make sure md_do_sync() will set MD_RECOVERY_DONE
  md: don't register sync_thread for reshape directly
  md: export helpers to stop sync_thread
  dm-raid: really frozen sync_thread during suspend
  md/dm-raid: don't call md_reap_sync_thread() directly
  dm-raid: add a new helper prepare_suspend() in md_personality
  md: export helper md_is_rdwr()
  md: don't suspend the array for interrupted reshape
  md/raid456: fix a deadlock for dm-raid456 while io concurrent with
    reshape
  dm-raid: fix lockdep waring in "pers->hot_add_disk"
  dm: wait for IO completion before removing dm device
  dm-raid: remove mddev_suspend/resume()

 drivers/md/dm-raid.c |  78 +++++++++++++++++++---------
 drivers/md/dm.c      |   3 ++
 drivers/md/md.c      | 120 +++++++++++++++++++++++++++++--------------
 drivers/md/md.h      |  16 ++++++
 drivers/md/raid10.c  |  16 +-----
 drivers/md/raid5.c   |  61 ++++++++++++----------
 6 files changed, 190 insertions(+), 104 deletions(-)
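For reference, the "detect the above 3 cases" idea from patches 9-11 can
be sketched roughly as follows (a simplified sketch, not the exact
patch; md_is_rdwr() is the helper exported by patch 9, and the flag
checks use the existing mddev->recovery bits):

	/*
	 * Reshape cannot make progress in any of these cases, so IO that
	 * would otherwise wait on it is failed directly instead of
	 * hanging.
	 */
	static bool reshape_cannot_progress(struct mddev *mddev)
	{
		return !md_is_rdwr(mddev) ||
		       test_bit(MD_RECOVERY_WAIT, &mddev->recovery) ||
		       test_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
	}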