| Message ID | 1455318392-26765-3-git-send-email-jsnow@redhat.com (mailing list archive) |
|---|---|
| State | New, archived |
On Fri, 02/12 18:06, John Snow wrote:
> During incremental backups, if the target has a cluster size that is
> larger than the backup cluster size and we are backing up to a target
> that cannot (for whichever reason) pull clusters up from a backing image,
> we may inadvertently create unusable incremental backup images.
>
> For example:
>
> If the bitmap tracks changes at a 64KB granularity and we transmit 64KB
> of data at a time but the target uses a 128KB cluster size, it is
> possible that only half of a target cluster will be recognized as dirty
> by the backup block job. When the cluster is allocated on the target
> image but only half populated with data, we lose the ability to
> distinguish between zero padding and uninitialized data.
>
> This does not happen if the target image has a backing file that points
> to the last known good backup.
>
> Even if we have a backing file, though, it's likely going to be faster
> to just buffer the redundant data ourselves from the live image than
> fetching it from the backing file, so let's just always round up to the
> target granularity.
>
> Signed-off-by: John Snow <jsnow@redhat.com>
> ---
>  block/backup.c | 13 ++++++++++---
>  1 file changed, 10 insertions(+), 3 deletions(-)
>
> diff --git a/block/backup.c b/block/backup.c
> index fcf0043..62faf81 100644
> --- a/block/backup.c
> +++ b/block/backup.c
> @@ -568,9 +568,16 @@ void backup_start(BlockDriverState *bs, BlockDriverState *target,
>      job->on_target_error = on_target_error;
>      job->target = target;
>      job->sync_mode = sync_mode;
> -    job->sync_bitmap = sync_mode == MIRROR_SYNC_MODE_INCREMENTAL ?
> -                       sync_bitmap : NULL;
> -    job->cluster_size = BACKUP_CLUSTER_SIZE_DEFAULT;
> +    if (sync_mode == MIRROR_SYNC_MODE_INCREMENTAL) {
> +        BlockDriverInfo bdi;
> +
> +        bdrv_get_info(job->target, &bdi);
> +        job->sync_bitmap = sync_bitmap;
> +        job->cluster_size = MAX(BACKUP_CLUSTER_SIZE_DEFAULT,
> +                                bdi.cluster_size);

Why not just do it for all sync modes?

Fam

> +    } else {
> +        job->cluster_size = BACKUP_CLUSTER_SIZE_DEFAULT;
> +    }
>      job->sectors_per_cluster = job->cluster_size / BDRV_SECTOR_SIZE;
>      job->common.len = len;
>      job->common.co = qemu_coroutine_create(backup_run);
> --
> 2.4.3
>
On 02/14/2016 01:49 AM, Fam Zheng wrote:
> On Fri, 02/12 18:06, John Snow wrote:
>> During incremental backups, if the target has a cluster size that is
>> larger than the backup cluster size and we are backing up to a target
>> that cannot (for whichever reason) pull clusters up from a backing image,
>> we may inadvertently create unusable incremental backup images.
>>
>> For example:
>>
>> If the bitmap tracks changes at a 64KB granularity and we transmit 64KB
>> of data at a time but the target uses a 128KB cluster size, it is
>> possible that only half of a target cluster will be recognized as dirty
>> by the backup block job. When the cluster is allocated on the target
>> image but only half populated with data, we lose the ability to
>> distinguish between zero padding and uninitialized data.
>>
>> This does not happen if the target image has a backing file that points
>> to the last known good backup.
>>
>> Even if we have a backing file, though, it's likely going to be faster
>> to just buffer the redundant data ourselves from the live image than
>> fetching it from the backing file, so let's just always round up to the
>> target granularity.
>>
>> Signed-off-by: John Snow <jsnow@redhat.com>
>> ---
>>  block/backup.c | 13 ++++++++++---
>>  1 file changed, 10 insertions(+), 3 deletions(-)
>>
>> diff --git a/block/backup.c b/block/backup.c
>> index fcf0043..62faf81 100644
>> --- a/block/backup.c
>> +++ b/block/backup.c
>> @@ -568,9 +568,16 @@ void backup_start(BlockDriverState *bs, BlockDriverState *target,
>>      job->on_target_error = on_target_error;
>>      job->target = target;
>>      job->sync_mode = sync_mode;
>> -    job->sync_bitmap = sync_mode == MIRROR_SYNC_MODE_INCREMENTAL ?
>> -                       sync_bitmap : NULL;
>> -    job->cluster_size = BACKUP_CLUSTER_SIZE_DEFAULT;
>> +    if (sync_mode == MIRROR_SYNC_MODE_INCREMENTAL) {
>> +        BlockDriverInfo bdi;
>> +
>> +        bdrv_get_info(job->target, &bdi);
>> +        job->sync_bitmap = sync_bitmap;
>> +        job->cluster_size = MAX(BACKUP_CLUSTER_SIZE_DEFAULT,
>> +                                bdi.cluster_size);
>
> Why not just do it for all sync modes?
>
> Fam
>

Caught me not thinking about those. sync=full is probably OK as-is, but
top and none suffer from a similar problem, you're right. Incremental is
the worst offender since the bitmap used to create the backup will have
been consumed, but I'll pay heed to the other modes in v2.

>> +    } else {
>> +        job->cluster_size = BACKUP_CLUSTER_SIZE_DEFAULT;
>> +    }
>>      job->sectors_per_cluster = job->cluster_size / BDRV_SECTOR_SIZE;
>>      job->common.len = len;
>>      job->common.co = qemu_coroutine_create(backup_run);
>> --
>> 2.4.3
>>