Message ID: 20230626080913.3493135-1-linan666@huaweicloud.com (mailing list archive)
Series: block/badblocks: fix badblocks setting error
Friendly ping.

On 2023/6/26 16:09, linan666@huaweicloud.com wrote:
> From: Li Nan <linan122@huawei.com>
>
> This patch series fixes some simple bugs of setting badblocks and
> optimizes struct badblocks.
> [...]
On 2023/6/26 16:09, linan666@huaweicloud.com wrote:
> From: Li Nan <linan122@huawei.com>
>
> This patch series fixes some simple bugs of setting badblocks and
> optimizes struct badblocks. Coly Li has been trying to refactor badblocks
> in the patch series "badblocks improvement for multiple bad block ranges"
> (https://lore.kernel.org/all/20220721121152.4180-1-colyli@suse.de), but the

Coly has sent the latest version of his patch series. Now this patch
series can be discarded.

> [...]
From: Li Nan <linan122@huawei.com>

This patch series fixes some simple bugs of setting badblocks and
optimizes struct badblocks. Coly Li has been trying to refactor badblocks
in the patch series "badblocks improvement for multiple bad block ranges"
(https://lore.kernel.org/all/20220721121152.4180-1-colyli@suse.de), but the
workload is significant. Before that, I will fix some easily triggered
issues and optimize some code that does not conflict with Coly's changes.

Changes in v4:
 - patch 1, remove the part that reorders fields
 - patch 3/4, improve commit log

Changes in v3:
 - delete patches with significant changes

Li Nan (4):
  block/badblocks: change some members of badblocks to bool
  block/badblocks: only set bb->changed/unacked_exist when badblocks
    changes
  block/badblocks: fix badblocks loss when badblocks combine
  block/badblocks: fix the bug of reverse order

 include/linux/badblocks.h |  9 +++++----
 block/badblocks.c         | 38 ++++++++++++++++++++++----------------
 2 files changed, 27 insertions(+), 20 deletions(-)