Message ID: 20230928061543.1845742-1-yukuai1@huaweicloud.com
Series: md: synchronize io with array reconfiguration
Hi Kuai,

Thanks for the patchset!

A few high-level questions/suggestions:

1. This is a big change that needs a lot of explanation. While you managed to
keep each patch relatively small (great job btw), it is not very clear why we
need these changes. Specifically, we are adding a new mutex, it is worth
mentioning why we cannot achieve the same goal without it. Please add
more information in the cover letter. We will put part of the cover letter in
the merge commit.

2. In the cover letter, please also highlight that we are removing
MD_ALLOW_SB_UPDATE and MD_UPDATING_SB. This is a big improvement.

3. Please rearrange the patch set so that the two "READ_ONCE/WRITE_ONCE"
patches are at the beginning.

4. Please consider merging some patches. The current "add-api => use-api =>
remove-old-api" order makes it tricky to follow what is being changed. For this
set, I found the diff of the whole set easier to follow than some of the big
patches.

Thanks again for all your hard work on this!
Song

On Wed, Sep 27, 2023 at 11:22 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>
> From: Yu Kuai <yukuai3@huawei.com>
[...]
Hi,

On 2023/09/29 3:15, Song Liu wrote:
> Hi Kuai,
>
> Thanks for the patchset!
>
> A few high-level questions/suggestions:

Thanks a lot for these!

> 1. This is a big change that needs a lot of explanation. While you managed to
> keep each patch relatively small (great job btw), it is not very clear why we
> need these changes. Specifically, we are adding a new mutex, it is worth
> mentioning why we cannot achieve the same goal without it. Please add
> more information in the cover letter. We will put part of the cover letter in
> the merge commit.

Yeah, I realized that I explained too little. I will add background and
design.

> 2. In the cover letter, please also highlight that we are removing
> MD_ALLOW_SB_UPDATE and MD_UPDATING_SB. This is a big improvement.

Okay.

> 3. Please rearrange the patch set so that the two "READ_ONCE/WRITE_ONCE"
> patches are at the beginning.

Okay.

> 4. Please consider merging some patches. The current "add-api => use-api =>
> remove-old-api" order makes it tricky to follow what is being changed. For this
> set, I found the diff of the whole set easier to follow than some of the big
> patches.

I referred to some other big patchsets that replace an old API, for example:

https://lore.kernel.org/all/20230818123232.2269-1-jack@suse.cz/

Currently I prefer to use one patch for each functional point, and I did
merge some patches in this version. For the remaining patches, do you
prefer one patch per file instead of one per functional point? (For
example, merge patches 10-12 for md/raid5-cache and 13-16 for md/raid5.)

Thanks,
Kuai

> Thanks again for all your hard work on this!
> Song
>
> On Wed, Sep 27, 2023 at 11:22 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>
>> From: Yu Kuai <yukuai3@huawei.com>
> [...]
On Wed, Oct 4, 2023 at 8:42 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>
> Hi,
>
> On 2023/09/29 3:15, Song Liu wrote:
> > Hi Kuai,
> >
> > Thanks for the patchset!
> >
> > A few high-level questions/suggestions:
>
> Thanks a lot for these!
> >
> > 1. This is a big change that needs a lot of explanation. While you managed to
> > keep each patch relatively small (great job btw), it is not very clear why we
> > need these changes. Specifically, we are adding a new mutex, it is worth
> > mentioning why we cannot achieve the same goal without it. Please add
> > more information in the cover letter. We will put part of the cover letter in
> > the merge commit.
>
> Yeah, I realized that I explained too little. I will add background and
> design.
> >
> > 2. In the cover letter, please also highlight that we are removing
> > MD_ALLOW_SB_UPDATE and MD_UPDATING_SB. This is a big improvement.
> >
>
> Okay.
>
> > 3. Please rearrange the patch set so that the two "READ_ONCE/WRITE_ONCE"
> > patches are at the beginning.
>
> Okay.
> >
> > 4. Please consider merging some patches. The current "add-api => use-api =>
> > remove-old-api" order makes it tricky to follow what is being changed. For this
> > set, I found the diff of the whole set easier to follow than some of the big
> > patches.
>
> I referred to some other big patchsets that replace an old API, for example:
>
> https://lore.kernel.org/all/20230818123232.2269-1-jack@suse.cz/

Yes, this is a safe way to replace old APIs. Since the scale of this
patchset is smaller, I was thinking it might not be necessary to go that
path. But I will let you make the decision.

> Currently I prefer to use one patch for each functional point, and I did
> merge some patches in this version. For the remaining patches, do you
> prefer one patch per file instead of one per functional point? (For
> example, merge patches 10-12 for md/raid5-cache and 13-16 for md/raid5.)

I think 10 should be a separate patch, and we can merge 11 and 12. We can
merge 13-16, and maybe also 5-7 and 18-20.

Thanks,
Song
Hi,

On 2023/10/05 11:55, Song Liu wrote:
> On Wed, Oct 4, 2023 at 8:42 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>
>> Hi,
>>
>> On 2023/09/29 3:15, Song Liu wrote:
>>> Hi Kuai,
>>>
>>> Thanks for the patchset!
>>>
>>> A few high-level questions/suggestions:
>>
>> Thanks a lot for these!
>>>
>>> 1. This is a big change that needs a lot of explanation. While you managed to
>>> keep each patch relatively small (great job btw), it is not very clear why we
>>> need these changes. Specifically, we are adding a new mutex, it is worth
>>> mentioning why we cannot achieve the same goal without it. Please add
>>> more information in the cover letter. We will put part of the cover letter in
>>> the merge commit.
>>
>> Yeah, I realized that I explained too little. I will add background and
>> design.
>>>

Can you take a look at this new cover letter?

##### Background

Our testers started to test raid10 last year, and we found that there
are lots of problems in the following test scenario:

 - add or remove disks to/from the array
 - issue IO to the array

At first, we fixed each problem independently, respecting the fact that
IO can run concurrently with array reconfiguration. However, on the one
hand new issues were continuously reported, and on the other hand other
personalities might have the same problems, so I started thinking about
how to fix these problems thoroughly.

Refer to how the block layer protects IO against queue reconfiguration
(for example, changing the elevator):

```
blk_mq_freeze_queue
-> wait for all io to be done, and prevent new io from being dispatched
// reconfiguration
blk_mq_unfreeze_queue
```

Then it came to my mind that I can do something similar to synchronize
IO with array reconfiguration.

##### RCU introduction

See details in https://www.kernel.org/doc/html/next/RCU/whatisRCU.html

 - the writer should replace old data with new data first, and free the
   old data after a grace period;
 - readers should handle both cases, that old data or new data is read,
   and the data that is read must not be dereferenced after the critical
   section;

##### Current synchronization

Adding or removing disks to/from the array can be triggered by
ioctl/sysfs/the daemon thread:

1. hold 'reconfig_mutex';
2. check that the rdev can be added/removed; one condition is that there
   is no IO, for example:

```
raid10_remove_disk
 if (atomic_read(&rdev->nr_pending))
  err = -EBUSY;
```

3. do the actual operations to add/remove the rdev; one step is to
   set/clear a pointer to the rdev, for example:

```
raid10_remove_disk
 p = conf->mirrors[xx]
 rdevp = &p->rdev/replacement
 *rdevp = NULL
```

4. check whether there is still no IO on this rdev; if there is, revert
   the pointer to the rdev and return failure, for example:

```
raid10_remove_disk
 synchronize_rcu()
 if (atomic_read(&rdev->nr_pending))
  err = -EBUSY
  *rdevp = rdev
```

The IO path uses rcu_read_lock/unlock() to access the rdev, for example:

```
raid10_write_request
 rcu_read_lock
 rdev = rcu_dereference(mirror->rdev/replacement)
 rcu_read_unlock

raid10_end_write_request
 rdev = conf->mirrors[dev].rdev/replacement
 -> rdev/rrdev is still used after rcu_read_unlock()
```

##### Current problems

 - RCU is used incorrectly;
 - there are lots of places where the old value can be read, and many of
   them don't handle this correctly;
 - between steps 3 and 4, if new IO is dispatched, NULL will be read for
   the rdev, and data will be lost.

##### New synchronization

Similar to how blk_mq_freeze_queue() works, to add or remove disks:

1. suspend the array; this guarantees that no new IO is dispatched and
   waits for dispatched IO to be done;
2. add or remove rdevs from the array;
3. resume the array;

The IO path doesn't need to change for now, and all the RCU
implementation can be removed.

There are already APIs to suspend/resume the array; unfortunately, they
can't be used here because:

 - the old APIs only wait for IO to be dispatched, not to be done;
 - the old APIs are only supported for personalities that implement the
   quiesce() callback;
 - the old APIs can only be called after the array starts running;
 - the old APIs must be called with 'reconfig_mutex' held and will wait
   for IO to be done; this behavior is risky because 'reconfig_mutex' is
   used by the daemon thread to update the super_block and handle IO. In
   order to prevent potential problems, there is weird logic so that,
   while the array is suspended with 'reconfig_mutex' held,
   md_check_recovery() can still update the super_block;

Then the main work is divided into 3 steps. The first is to make sure
the new APIs to suspend the array are general:

 - make sure suspending the array will wait for IO to be done (done by []);
 - make sure the array can be suspended for all personalities (done by []);
 - make sure the array can be suspended at any time (done by []);
 - make sure suspending the array doesn't rely on 'reconfig_mutex';

The second step is to replace the old APIs with the new APIs:

```
From:
lock reconfig_mutex
suspend array
resume array
unlock reconfig_mutex

To:
suspend array
lock reconfig_mutex
unlock reconfig_mutex
resume array
```

Finally, for the remaining paths that involve reconfiguration, suspend
the array first:

```
From:
// reconfiguration

To:
suspend array
// reconfiguration
resume array
```

>>> 2. In the cover letter, please also highlight that we are removing
>>> MD_ALLOW_SB_UPDATE and MD_UPDATING_SB. This is a big improvement.
>>>
>>
>> Okay.
>>> 3. Please rearrange the patch set so that the two "READ_ONCE/WRITE_ONCE"
>>> patches are at the beginning.
>>
>> Okay.
>>>
>>> 4. Please consider merging some patches. The current "add-api => use-api =>
>>> remove-old-api" order makes it tricky to follow what is being changed. For this
>>> set, I found the diff of the whole set easier to follow than some of the big
>>> patches.
>> I referred to some other big patchsets that replace an old API, for example:
>>
>> https://lore.kernel.org/all/20230818123232.2269-1-jack@suse.cz/
>
> Yes, this is a safe way to replace old APIs. Since the scale of this
> patchset is smaller, I was thinking it might not be necessary to go that
> path. But I will let you make the decision.
>
>> Currently I prefer to use one patch for each functional point, and I did
>> merge some patches in this version. For the remaining patches, do you
>> prefer one patch per file instead of one per functional point? (For
>> example, merge patches 10-12 for md/raid5-cache and 13-16 for md/raid5.)
>
> I think 10 should be a separate patch, and we can merge 11 and 12. We can
> merge 13-16, and maybe also 5-7 and 18-20.
>
> Thanks,
> Song
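[Editor's note: to make the "drain IO, then reconfigure" synchronization in the cover letter above more concrete, here is a minimal userspace C sketch of the same idea (conceptually similar to blk_mq_freeze_queue()). All names here (io_begin, io_end, array_suspend, array_resume) are invented for illustration and are not the md API; the kernel implementation differs in its details.]

```
/*
 * Userspace sketch of the drain-then-reconfigure pattern: the IO path
 * brackets each IO with io_begin()/io_end(), and the reconfiguration
 * path uses array_suspend()/array_resume() around the change.
 */
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int  active_io;   /* IOs dispatched but not yet done */
static bool suspended;   /* true while the "array" is being reconfigured */

/* IO path: block new IO while suspended, then account it as in flight. */
void io_begin(void)
{
	pthread_mutex_lock(&lock);
	while (suspended)
		pthread_cond_wait(&cond, &lock);
	active_io++;
	pthread_mutex_unlock(&lock);
}

void io_end(void)
{
	pthread_mutex_lock(&lock);
	if (--active_io == 0)
		pthread_cond_broadcast(&cond);   /* wake a waiting suspend */
	pthread_mutex_unlock(&lock);
}

/* Reconfiguration path: stop new IO and wait for dispatched IO to finish. */
void array_suspend(void)
{
	pthread_mutex_lock(&lock);
	suspended = true;
	while (active_io > 0)
		pthread_cond_wait(&cond, &lock);
	pthread_mutex_unlock(&lock);
}

void array_resume(void)
{
	pthread_mutex_lock(&lock);
	suspended = false;
	pthread_cond_broadcast(&cond);           /* let blocked IO proceed */
	pthread_mutex_unlock(&lock);
}
```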
On Fri, Oct 6, 2023 at 7:32 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>
> Hi,
>
> On 2023/10/05 11:55, Song Liu wrote:
> > On Wed, Oct 4, 2023 at 8:42 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
> >>
> >> Hi,
> >>
> >> On 2023/09/29 3:15, Song Liu wrote:
> >>> Hi Kuai,
> >>>
> >>> Thanks for the patchset!
> >>>
> >>> A few high-level questions/suggestions:
> >>
> >> Thanks a lot for these!
> >>>
> >>> 1. This is a big change that needs a lot of explanation. While you managed to
> >>> keep each patch relatively small (great job btw), it is not very clear why we
> >>> need these changes. Specifically, we are adding a new mutex, it is worth
> >>> mentioning why we cannot achieve the same goal without it. Please add
> >>> more information in the cover letter. We will put part of the cover letter in
> >>> the merge commit.
> >>
> >> Yeah, I realized that I explained too little. I will add background and
> >> design.
> >>>
> Can you take a look at this new cover letter?

I don't have time right now to look into all the details, but it looks
great at first glance. We can still edit it a little bit when applying
the patchset, but that may not be necessary.

Thanks,
Song

>
> ##### Background
>
> Our testers started to test raid10 last year, and we found that there
> are lots of problems in the following test scenario:
>
>  - add or remove disks to/from the array
>  - issue IO to the array
Hi,

On 2023/10/07 10:40, Song Liu wrote:
>> Can you take a look at this new cover letter?
>
> I don't have time right now to look into all the details, but it looks
> great at first glance. We can still edit it a little bit when applying
> the patchset, but that may not be necessary.

Yeah, it's not urgent, so you can take it slow; I just want to make sure
that you're good with it. I'll edit this cover letter a bit and send v4
soon.

Thanks,
Kuai
From: Yu Kuai <yukuai3@huawei.com>

Changes in v3:
 - rebase with the latest md-next;
 - remove patch 2 from v2 and replace it with a new patch;
 - fix a null-ptr-dereference in rdev_attr_store() where mddev is used
   before being checked;
 - merge patches 20-22 from v1 into one patch;
 - mddev_lock() used to be called first and can be interrupted; allow
   the new API, which is now called before mddev_lock(), to be
   interrupted as well;
 - improve some comments and coding;

Changes in v2:
 - rebase with the latest md-next;
 - remove some follow-up cleanup patches; these patches will be sent
   later, after this patchset.

After the previous four patchsets of preparatory work, this patchset
implements a new version of mddev_suspend(). With the new APIs:

 - reconfig_mutex is not required;
 - the weird logic where the array is suspended while holding
   'reconfig_mutex' so that md_check_recovery() can update the
   superblock is not needed;
 - the special handling, 'pers->prepare_suspend', for raid456 is not
   needed;
 - it's safe to call them at any time once the mddev is allocated, and
   they are designed to be used from slow paths where the array
   configuration is changed.

And use the new APIs to replace:

  mddev_lock
   mddev_suspend (or not)
   // array reconfiguration
   mddev_resume (or not)
  mddev_unlock

with:

  mddev_suspend
   mddev_lock
   // array reconfiguration
   mddev_unlock
  mddev_resume

However, the above change is not possible for raid5 and raid-cluster in
some corner cases, where mddev_suspend/resume() is replaced with the
quiesce() callback, which will suspend the array as well.

This patchset is tested in my VM with the mdadm test suite using loop
devices, except for the 10ddf tests (they already fail without this
patchset).

A lot of cleanups will be started after this patchset.

Yu Kuai (25):
  md: use READ_ONCE/WRITE_ONCE for 'suspend_lo' and 'suspend_hi'
  md: replace is_md_suspended() with 'mddev->suspended' in md_check_recovery()
  md: add new helpers to suspend/resume array
  md: add new helpers to suspend/resume and lock/unlock array
  md: use new apis to suspend array for suspend_lo/hi_store()
  md: use new apis to suspend array for level_store()
  md: use new apis to suspend array for serialize_policy_store()
  md/dm-raid: use new apis to suspend array
  md/md-bitmap: use new apis to suspend array for location_store()
  md/raid5-cache: use READ_ONCE/WRITE_ONCE for 'conf->log'
  md/raid5-cache: use new apis to suspend array for r5c_disable_writeback_async()
  md/raid5-cache: use new apis to suspend array for r5c_journal_mode_store()
  md/raid5: use new apis to suspend array for raid5_store_stripe_size()
  md/raid5: use new apis to suspend array for raid5_store_skip_copy()
  md/raid5: use new apis to suspend array for raid5_store_group_thread_cnt()
  md/raid5: use new apis to suspend array for raid5_change_consistency_policy()
  md/raid5: replace suspend with quiesce() callback
  md: use new apis to suspend array for ioctls involed array reconfiguration
  md: use new apis to suspend array for adding/removing rdev from state_store()
  md: use new apis to suspend array before mddev_create/destroy_serial_pool
  md: cleanup mddev_create/destroy_serial_pool()
  md/md-linear: cleanup linear_add()
  md: suspend array in md_start_sync() if array need reconfiguration
  md: remove old apis to suspend the array
  md: rename __mddev_suspend/resume() back to mddev_suspend/resume()

 drivers/md/dm-raid.c       |  10 +-
 drivers/md/md-autodetect.c |   4 +-
 drivers/md/md-bitmap.c     |  18 ++-
 drivers/md/md-linear.c     |   2 -
 drivers/md/md.c            | 233 ++++++++++++++++++++-----------------
 drivers/md/md.h            |  43 +++++--
 drivers/md/raid5-cache.c   |  64 +++++-----
 drivers/md/raid5.c         |  56 ++++-----
 8 files changed, 226 insertions(+), 204 deletions(-)
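[Editor's note: to illustrate the ordering change described in the cover letter above ("suspend the array, then take reconfig_mutex" instead of the other way around), here is a continuation of the earlier userspace sketch. reconfigure_old()/reconfigure_new() and the pthread reconfig_mutex are stand-ins for illustration only, not the actual md helpers added by this series.]

```
#include <pthread.h>

/* Primitives from the earlier sketch in this thread. */
void array_suspend(void);
void array_resume(void);

static pthread_mutex_t reconfig_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Old ordering: the whole suspend/resume window sits under reconfig_mutex. */
static void reconfigure_old(void)
{
	pthread_mutex_lock(&reconfig_mutex);
	array_suspend();
	/* ... array reconfiguration ... */
	array_resume();
	pthread_mutex_unlock(&reconfig_mutex);
}

/* New ordering: drain IO first, without holding reconfig_mutex. */
static void reconfigure_new(void)
{
	array_suspend();
	pthread_mutex_lock(&reconfig_mutex);
	/* ... array reconfiguration ... */
	pthread_mutex_unlock(&reconfig_mutex);
	array_resume();
}
```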