Message ID | 20230602124747.1544077-1-jean-louis@dupond.be
---|---
State | New, archived
Series | [v2] qcow2: add discard-no-unref option
On 02.06.23 14:47, Jean-Louis Dupond wrote:
> When we for example have a sparse qcow2 image and discard: unmap is enabled,
> there can be a lot of fragmentation in the image after some time. Especially on VM's
> that do a lot of writes/deletes.
> This causes the qcow2 image to grow even over 110% of its virtual size,
> because the free gaps in the image get too small to allocate new
> continuous clusters. So it allocates new space at the end of the image.
>
> Disabling discard is not an option, as discard is needed to keep the
> incremental backup size as low as possible. Without discard, the
> incremental backups would become large, as qemu thinks it's just dirty
> blocks but it doesn't know the blocks are unneeded.
> So we need to avoid fragmentation but also 'empty' the unneeded blocks in
> the image to have a small incremental backup.
>
> In addition, we also want to send the discards further down the stack, so
> the underlying blocks are still discarded.
>
> Therefor we introduce a new qcow2 option "discard-no-unref".
> When setting this option to true, discards will no longer have the qcow2
> driver relinquish cluster allocations. Other than that, the request is
> handled as normal: All clusters in range are marked as zero, and, if
> pass-discard-request is true, it is passed further down the stack.
> The only difference is that the now-zero clusters are preallocated
> instead of being unallocated.
> This will avoid fragmentation on the qcow2 image.
>
> Fixes: https://gitlab.com/qemu-project/qemu/-/issues/1621
> Signed-off-by: Jean-Louis Dupond <jean-louis@dupond.be>
> ---
>  block/qcow2-cluster.c | 32 ++++++++++++++++++++++++++++----
>  block/qcow2.c         | 18 ++++++++++++++++++
>  block/qcow2.h         |  3 +++
>  qapi/block-core.json  | 12 ++++++++++++
>  qemu-options.hx       | 12 ++++++++++++
>  5 files changed, 73 insertions(+), 4 deletions(-)
>
> diff --git a/block/qcow2-cluster.c b/block/qcow2-cluster.c
> index 39cda7f907..1f130c6ab9 100644
> --- a/block/qcow2-cluster.c
> +++ b/block/qcow2-cluster.c
> @@ -1894,6 +1894,7 @@ again:
>      return 0;
>  }
>
> +
>  /*
>   * This discards as many clusters of nb_clusters as possible at once (i.e.
>   * all clusters in the same L2 slice) and returns the number of discarded

Was adding this empty line intentional? (If not, I’d drop it.)

> @@ -1925,6 +1926,9 @@ static int discard_in_l2_slice(BlockDriverState *bs, uint64_t offset,
>          uint64_t new_l2_bitmap = old_l2_bitmap;
>          QCow2ClusterType cluster_type =
>              qcow2_get_cluster_type(bs, old_l2_entry);
> +        bool keep_reference = (cluster_type != QCOW2_CLUSTER_COMPRESSED) &&
> +                              (s->discard_no_unref &&
> +                               type == QCOW2_DISCARD_REQUEST);

(Sorry I didn’t notice before :/) I think there’s a condition missing
here, namely `full_discard` (i.e. `&& !full_discard`). We must set
`keep_reference` only if we will actually keep the reference, which
won’t happen when `full_discard` is set. (Same could be said for
s->qcow_version < 3, but in that case, `s->discard_no_unref` can’t be
true.)

(Not a problem in practice because `type == QCOW2_DISCARD_REQUEST` never
happens together with `full_discard`, but better be safe than sorry.)

Alternatively...

>          /*
>           * If full_discard is true, the cluster should not read back as zeroes,

[...]
> @@ -1960,8 +1976,16 @@ static int discard_in_l2_slice(BlockDriverState *bs, uint64_t offset,
>          if (has_subclusters(s)) {
>              set_l2_bitmap(s, l2_slice, l2_index + i, new_l2_bitmap);
>          }
> -        /* Then decrease the refcount */
> -        qcow2_free_any_cluster(bs, old_l2_entry, type);
> +        if (!keep_reference) {

...we could explicitly check here whether the new L2 entry is still
allocated or not, like

```
QCow2ClusterType new_cluster_type =
    qcow2_get_cluster_type(bs, new_l2_entry);

if (!qcow2_cluster_is_allocated(new_cluster_type)) {
    /* Decrease the refcount if the cluster has been deallocated */
    qcow2_free_any_cluster(...);
} else if (s->discard_passthrough[type] &&
           qcow2_cluster_is_allocated(cluster_type)) {
    /* If we keep the reference, pass on the discard still */

    /* Discard must always produce zero-reading clusters */
    assert(new_cluster_type == QCOW2_CLUSTER_ZERO_ALLOC);
    /* Compressed clusters will never remain allocated */
    assert(cluster_type != QCOW2_CLUSTER_COMPRESSED);

    bdrv_pdiscard(...);
}
```

Just an idea, though, I understand if you’d rather not modify the
patch further.

> +            /* Then decrease the refcount */
> +            qcow2_free_any_cluster(bs, old_l2_entry, type);
> +        } else if (s->discard_passthrough[type] &&
> +                   (cluster_type == QCOW2_CLUSTER_NORMAL ||
> +                    cluster_type == QCOW2_CLUSTER_ZERO_ALLOC)) {
> +            /* If we keep the reference, pass on the discard still */
> +            bdrv_pdiscard(s->data_file, new_l2_entry & L2E_OFFSET_MASK,
> +                          s->cluster_size);

I mentioned this briefly on IRC, might have gone under the radar; I
think using `old_l2_entry` is better than `new_l2_entry`. In practice,
there shouldn’t be a difference, but I think it’s slightly cleaner to
free based on the old entry than have this be based on the new one.

(Also, in case we did mess up, like in the hypothetical case above where
`keep_reference` is true while `full_discard` is also true, using
`old_l2_entry` means we’ll just accidentally discard the old cluster
(the accident is merely to discard the cluster instead of decrementing
its refcount), instead of discarding a completely wrong cluster (the
image header, with `new_l2_entry = 0`).)

Rest looks good to me!

Hanna
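Hanna's `!full_discard` point, applied to the hunk quoted above, would presumably turn the condition into something like the following. This is only a sketch using the variable names from the quoted code, not the actual v3 change:

```c
/*
 * Sketch: keep the cluster allocation only for guest-initiated discard
 * requests, never for compressed clusters and never when full_discard is
 * set (qcow2 version >= 3 is implied, since discard_no_unref is rejected
 * for older images at open time).
 */
bool keep_reference = (s->discard_no_unref &&
                       type == QCOW2_DISCARD_REQUEST &&
                       !full_discard &&
                       cluster_type != QCOW2_CLUSTER_COMPRESSED);
```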
On 2/06/2023 17:28, Hanna Czenczek wrote:
> On 02.06.23 14:47, Jean-Louis Dupond wrote:
>> When we for example have a sparse qcow2 image and discard: unmap is
>> enabled,
>> there can be a lot of fragmentation in the image after some time.
>> Especially on VM's
>> that do a lot of writes/deletes.
>> This causes the qcow2 image to grow even over 110% of its virtual size,
>> because the free gaps in the image get too small to allocate new
>> continuous clusters. So it allocates new space at the end of the image.
>>
>> Disabling discard is not an option, as discard is needed to keep the
>> incremental backup size as low as possible. Without discard, the
>> incremental backups would become large, as qemu thinks it's just dirty
>> blocks but it doesn't know the blocks are unneeded.
>> So we need to avoid fragmentation but also 'empty' the unneeded
>> blocks in
>> the image to have a small incremental backup.
>>
>> In addition, we also want to send the discards further down the
>> stack, so
>> the underlying blocks are still discarded.
>>
>> Therefor we introduce a new qcow2 option "discard-no-unref".
>> When setting this option to true, discards will no longer have the qcow2
>> driver relinquish cluster allocations. Other than that, the request is
>> handled as normal: All clusters in range are marked as zero, and, if
>> pass-discard-request is true, it is passed further down the stack.
>> The only difference is that the now-zero clusters are preallocated
>> instead of being unallocated.
>> This will avoid fragmentation on the qcow2 image.
>>
>> Fixes: https://gitlab.com/qemu-project/qemu/-/issues/1621
>> Signed-off-by: Jean-Louis Dupond <jean-louis@dupond.be>
>> ---
>>  block/qcow2-cluster.c | 32 ++++++++++++++++++++++++++++----
>>  block/qcow2.c         | 18 ++++++++++++++++++
>>  block/qcow2.h         |  3 +++
>>  qapi/block-core.json  | 12 ++++++++++++
>>  qemu-options.hx       | 12 ++++++++++++
>>  5 files changed, 73 insertions(+), 4 deletions(-)
>>
>> diff --git a/block/qcow2-cluster.c b/block/qcow2-cluster.c
>> index 39cda7f907..1f130c6ab9 100644
>> --- a/block/qcow2-cluster.c
>> +++ b/block/qcow2-cluster.c
>> @@ -1894,6 +1894,7 @@ again:
>>      return 0;
>>  }
>> +
>>  /*
>>   * This discards as many clusters of nb_clusters as possible at
>> once (i.e.
>>   * all clusters in the same L2 slice) and returns the number of
>> discarded
>
> Was adding this empty line intentional? (If not, I’d drop it.)

Dropped

>
>> @@ -1925,6 +1926,9 @@ static int discard_in_l2_slice(BlockDriverState
>> *bs, uint64_t offset,
>>          uint64_t new_l2_bitmap = old_l2_bitmap;
>>          QCow2ClusterType cluster_type =
>>              qcow2_get_cluster_type(bs, old_l2_entry);
>> +        bool keep_reference = (cluster_type !=
>> QCOW2_CLUSTER_COMPRESSED) &&
>> +                              (s->discard_no_unref &&
>> +                               type == QCOW2_DISCARD_REQUEST);
>
> (Sorry I didn’t notice before :/) I think there’s a condition missing
> here, namely `full_discard` (i.e. `&& !full_discard`). We must set
> `keep_reference` only if we will actually keep the reference, which
> won’t happen when `full_discard` is set. (Same could be said for
> s->qcow_version < 3, but in that case, `s->discard_no_unref` can’t be
> true.)
>
> (Not a problem in practice because `type == QCOW2_DISCARD_REQUEST`
> never happens together with `full_discard`, but better be safe than
> sorry.)

Fixed!

>
> Alternatively...
>
>>          /*
>>           * If full_discard is true, the cluster should not read
>> back as zeroes,
>
> [...]
>
>> @@ -1960,8 +1976,16 @@ static int
>> discard_in_l2_slice(BlockDriverState *bs, uint64_t offset,
>>          if (has_subclusters(s)) {
>>              set_l2_bitmap(s, l2_slice, l2_index + i, new_l2_bitmap);
>>          }
>> -        /* Then decrease the refcount */
>> -        qcow2_free_any_cluster(bs, old_l2_entry, type);
>> +        if (!keep_reference) {
>
> ...we could explicitly check here whether the new L2 entry is still
> allocated or not, like
>
> ```
> QCow2ClusterType new_cluster_type =
>     qcow2_get_cluster_type(bs, new_l2_entry);
>
> if (!qcow2_cluster_is_allocated(new_cluster_type)) {
>     /* Decrease the refcount if the cluster has been deallocated */
>     qcow2_free_any_cluster(...);
> } else if (s->discard_passthrough[type] &&
>            qcow2_cluster_is_allocated(cluster_type)) {
>     /* If we keep the reference, pass on the discard still */
>
>     /* Discard must always produce zero-reading clusters */
>     assert(new_cluster_type == QCOW2_CLUSTER_ZERO_ALLOC);
>     /* Compressed clusters will never remain allocated */
>     assert(cluster_type != QCOW2_CLUSTER_COMPRESSED);
>
>     bdrv_pdiscard(...);
> }
> ```
>
> Just an idea, though, I understand if you’d rather not modify the
> patch further.
>
>> +            /* Then decrease the refcount */
>> +            qcow2_free_any_cluster(bs, old_l2_entry, type);
>> +        } else if (s->discard_passthrough[type] &&
>> +                   (cluster_type == QCOW2_CLUSTER_NORMAL ||
>> +                    cluster_type == QCOW2_CLUSTER_ZERO_ALLOC)) {
>> +            /* If we keep the reference, pass on the discard still */
>> +            bdrv_pdiscard(s->data_file, new_l2_entry & L2E_OFFSET_MASK,
>> +                          s->cluster_size);
>
> I mentioned this briefly on IRC, might have gone under the radar; I
> think using `old_l2_entry` is better than `new_l2_entry`. In
> practice, there shouldn’t be a difference, but I think it’s slightly
> cleaner to free based on the old entry than have this be based on the
> new one.

Was caused by undoing too much :) Fixed.

>
> (Also, in case we did mess up, like in the hypothetical case above
> where `keep_reference` is true while `full_discard` is also true,
> using `old_l2_entry` means we’ll just accidentally discard the old
> cluster (the accident is merely to discard the cluster instead of
> decrementing its refcount), instead of discarding a completely wrong
> cluster (the image header, with `new_l2_entry = 0`).)
>
> Rest looks good to me!

Posted v3 patch to the ML

>
> Hanna
>

Thanks
Jean-Louis
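Taking both review points together, the refcount/passthrough branch of a reworked version of the patch could read along these lines. This is a sketch based on the thread, not the v3 patch that was actually posted:

```c
if (!keep_reference) {
    /* Then decrease the refcount */
    qcow2_free_any_cluster(bs, old_l2_entry, type);
} else if (s->discard_passthrough[type] &&
           (cluster_type == QCOW2_CLUSTER_NORMAL ||
            cluster_type == QCOW2_CLUSTER_ZERO_ALLOC)) {
    /*
     * If we keep the reference, pass on the discard still. Discarding
     * based on old_l2_entry (as Hanna suggests) means a hypothetical
     * bogus keep_reference can at worst discard the old cluster rather
     * than offset 0, i.e. the image header.
     */
    bdrv_pdiscard(s->data_file, old_l2_entry & L2E_OFFSET_MASK,
                  s->cluster_size);
}
```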
diff --git a/block/qcow2-cluster.c b/block/qcow2-cluster.c
index 39cda7f907..1f130c6ab9 100644
--- a/block/qcow2-cluster.c
+++ b/block/qcow2-cluster.c
@@ -1894,6 +1894,7 @@ again:
     return 0;
 }
 
+
 /*
  * This discards as many clusters of nb_clusters as possible at once (i.e.
  * all clusters in the same L2 slice) and returns the number of discarded
@@ -1925,6 +1926,9 @@ static int discard_in_l2_slice(BlockDriverState *bs, uint64_t offset,
         uint64_t new_l2_bitmap = old_l2_bitmap;
         QCow2ClusterType cluster_type =
             qcow2_get_cluster_type(bs, old_l2_entry);
+        bool keep_reference = (cluster_type != QCOW2_CLUSTER_COMPRESSED) &&
+                              (s->discard_no_unref &&
+                               type == QCOW2_DISCARD_REQUEST);
 
         /*
          * If full_discard is true, the cluster should not read back as zeroes,
@@ -1943,10 +1947,22 @@ static int discard_in_l2_slice(BlockDriverState *bs, uint64_t offset,
             new_l2_entry = new_l2_bitmap = 0;
         } else if (bs->backing || qcow2_cluster_is_allocated(cluster_type)) {
             if (has_subclusters(s)) {
-                new_l2_entry = 0;
+                if (keep_reference) {
+                    new_l2_entry = old_l2_entry;
+                } else {
+                    new_l2_entry = 0;
+                }
                 new_l2_bitmap = QCOW_L2_BITMAP_ALL_ZEROES;
             } else {
-                new_l2_entry = s->qcow_version >= 3 ? QCOW_OFLAG_ZERO : 0;
+                if (s->qcow_version >= 3) {
+                    if (keep_reference) {
+                        new_l2_entry |= QCOW_OFLAG_ZERO;
+                    } else {
+                        new_l2_entry = QCOW_OFLAG_ZERO;
+                    }
+                } else {
+                    new_l2_entry = 0;
+                }
             }
         }
 
@@ -1960,8 +1976,16 @@ static int discard_in_l2_slice(BlockDriverState *bs, uint64_t offset,
         if (has_subclusters(s)) {
             set_l2_bitmap(s, l2_slice, l2_index + i, new_l2_bitmap);
         }
-        /* Then decrease the refcount */
-        qcow2_free_any_cluster(bs, old_l2_entry, type);
+        if (!keep_reference) {
+            /* Then decrease the refcount */
+            qcow2_free_any_cluster(bs, old_l2_entry, type);
+        } else if (s->discard_passthrough[type] &&
+                   (cluster_type == QCOW2_CLUSTER_NORMAL ||
+                    cluster_type == QCOW2_CLUSTER_ZERO_ALLOC)) {
+            /* If we keep the reference, pass on the discard still */
+            bdrv_pdiscard(s->data_file, new_l2_entry & L2E_OFFSET_MASK,
+                          s->cluster_size);
+        }
     }
 
     qcow2_cache_put(s->l2_table_cache, (void **) &l2_slice);
diff --git a/block/qcow2.c b/block/qcow2.c
index 7f3948360d..e23edd48c2 100644
--- a/block/qcow2.c
+++ b/block/qcow2.c
@@ -682,6 +682,7 @@ static const char *const mutable_opts[] = {
     QCOW2_OPT_DISCARD_REQUEST,
     QCOW2_OPT_DISCARD_SNAPSHOT,
     QCOW2_OPT_DISCARD_OTHER,
+    QCOW2_OPT_DISCARD_NO_UNREF,
     QCOW2_OPT_OVERLAP,
     QCOW2_OPT_OVERLAP_TEMPLATE,
     QCOW2_OPT_OVERLAP_MAIN_HEADER,
@@ -726,6 +727,11 @@ static QemuOptsList qcow2_runtime_opts = {
             .type = QEMU_OPT_BOOL,
             .help = "Generate discard requests when other clusters are freed",
         },
+        {
+            .name = QCOW2_OPT_DISCARD_NO_UNREF,
+            .type = QEMU_OPT_BOOL,
+            .help = "Do not unreference discarded clusters",
+        },
         {
             .name = QCOW2_OPT_OVERLAP,
             .type = QEMU_OPT_STRING,
@@ -969,6 +975,7 @@ typedef struct Qcow2ReopenState {
     bool use_lazy_refcounts;
     int overlap_check;
     bool discard_passthrough[QCOW2_DISCARD_MAX];
+    bool discard_no_unref;
     uint64_t cache_clean_interval;
     QCryptoBlockOpenOptions *crypto_opts; /* Disk encryption runtime options */
 } Qcow2ReopenState;
@@ -1140,6 +1147,15 @@ static int qcow2_update_options_prepare(BlockDriverState *bs,
     r->discard_passthrough[QCOW2_DISCARD_OTHER] =
         qemu_opt_get_bool(opts, QCOW2_OPT_DISCARD_OTHER, false);
 
+    r->discard_no_unref = qemu_opt_get_bool(opts, QCOW2_OPT_DISCARD_NO_UNREF,
+                                            false);
+    if (r->discard_no_unref && s->qcow_version < 3) {
+        error_setg(errp,
+                   "discard-no-unref is only supported since qcow2 version 3");
+        ret = -EINVAL;
+        goto fail;
+    }
+
     switch (s->crypt_method_header) {
     case QCOW_CRYPT_NONE:
         if (encryptfmt) {
@@ -1220,6 +1236,8 @@ static void qcow2_update_options_commit(BlockDriverState *bs,
         s->discard_passthrough[i] = r->discard_passthrough[i];
     }
 
+    s->discard_no_unref = r->discard_no_unref;
+
     if (s->cache_clean_interval != r->cache_clean_interval) {
         cache_clean_timer_del(bs);
         s->cache_clean_interval = r->cache_clean_interval;
diff --git a/block/qcow2.h b/block/qcow2.h
index 4f67eb912a..ea9adb5706 100644
--- a/block/qcow2.h
+++ b/block/qcow2.h
@@ -133,6 +133,7 @@
 #define QCOW2_OPT_DISCARD_REQUEST "pass-discard-request"
 #define QCOW2_OPT_DISCARD_SNAPSHOT "pass-discard-snapshot"
 #define QCOW2_OPT_DISCARD_OTHER "pass-discard-other"
+#define QCOW2_OPT_DISCARD_NO_UNREF "discard-no-unref"
 #define QCOW2_OPT_OVERLAP "overlap-check"
 #define QCOW2_OPT_OVERLAP_TEMPLATE "overlap-check.template"
 #define QCOW2_OPT_OVERLAP_MAIN_HEADER "overlap-check.main-header"
@@ -385,6 +386,8 @@ typedef struct BDRVQcow2State {
 
     bool discard_passthrough[QCOW2_DISCARD_MAX];
 
+    bool discard_no_unref;
+
     int overlap_check; /* bitmask of Qcow2MetadataOverlap values */
     bool signaled_corruption;
 
diff --git a/qapi/block-core.json b/qapi/block-core.json
index 98d9116dae..7e9446e49b 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -3478,6 +3478,17 @@
 # @pass-discard-other: whether discard requests for the data source
 #     should be issued on other occasions where a cluster gets freed
 #
+# @discard-no-unref: when enabled, discards from the guest will not cause
+#     cluster allocations to be relinquished. This prevents qcow2 fragmentation
+#     that would be caused by such discards. Besides potential
+#     performance degradation, such fragmentation can lead to increased
+#     allocation of clusters past the end of the image file,
+#     resulting in image files whose file length can grow much larger
+#     than their guest disk size would suggest.
+#     If image file length is of concern (e.g. when storing qcow2
+#     images directly on block devices), you should consider enabling
+#     this option. (since 8.1)
+#
 # @overlap-check: which overlap checks to perform for writes to the
 #     image, defaults to 'cached' (since 2.2)
 #
@@ -3516,6 +3527,7 @@
             '*pass-discard-request': 'bool',
             '*pass-discard-snapshot': 'bool',
             '*pass-discard-other': 'bool',
+            '*discard-no-unref': 'bool',
             '*overlap-check': 'Qcow2OverlapChecks',
             '*cache-size': 'int',
             '*l2-cache-size': 'int',
diff --git a/qemu-options.hx b/qemu-options.hx
index b37eb9662b..b57489d7ca 100644
--- a/qemu-options.hx
+++ b/qemu-options.hx
@@ -1431,6 +1431,18 @@ SRST
         issued on other occasions where a cluster gets freed
         (on/off; default: off)
 
+    ``discard-no-unref``
+        When enabled, discards from the guest will not cause cluster
+        allocations to be relinquished. This prevents qcow2 fragmentation
+        that would be caused by such discards. Besides potential
+        performance degradation, such fragmentation can lead to increased
+        allocation of clusters past the end of the image file,
+        resulting in image files whose file length can grow much larger
+        than their guest disk size would suggest.
+        If image file length is of concern (e.g. when storing qcow2
+        images directly on block devices), you should consider enabling
+        this option.
+
     ``overlap-check``
         Which overlap checks to perform for writes to the image
         (none/constant/cached/all; default: cached). For details or
When we for example have a sparse qcow2 image and discard: unmap is enabled,
there can be a lot of fragmentation in the image after some time. Especially on VM's
that do a lot of writes/deletes.
This causes the qcow2 image to grow even over 110% of its virtual size,
because the free gaps in the image get too small to allocate new
continuous clusters. So it allocates new space at the end of the image.

Disabling discard is not an option, as discard is needed to keep the
incremental backup size as low as possible. Without discard, the
incremental backups would become large, as qemu thinks it's just dirty
blocks but it doesn't know the blocks are unneeded.
So we need to avoid fragmentation but also 'empty' the unneeded blocks in
the image to have a small incremental backup.

In addition, we also want to send the discards further down the stack, so
the underlying blocks are still discarded.

Therefore we introduce a new qcow2 option "discard-no-unref".
When setting this option to true, discards will no longer have the qcow2
driver relinquish cluster allocations. Other than that, the request is
handled as normal: All clusters in range are marked as zero, and, if
pass-discard-request is true, it is passed further down the stack.
The only difference is that the now-zero clusters are preallocated
instead of being unallocated.
This will avoid fragmentation on the qcow2 image.

Fixes: https://gitlab.com/qemu-project/qemu/-/issues/1621
Signed-off-by: Jean-Louis Dupond <jean-louis@dupond.be>
---
 block/qcow2-cluster.c | 32 ++++++++++++++++++++++++++++----
 block/qcow2.c         | 18 ++++++++++++++++++
 block/qcow2.h         |  3 +++
 qapi/block-core.json  | 12 ++++++++++++
 qemu-options.hx       | 12 ++++++++++++
 5 files changed, 73 insertions(+), 4 deletions(-)
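For completeness, once merged the option sits next to the existing qcow2 discard settings; a hypothetical invocation (the exact file-node options depend on the setup) could look like `-blockdev driver=qcow2,node-name=disk0,file.driver=file,file.filename=vm.qcow2,discard=unmap,discard-no-unref=on`. Combined with pass-discard-request, the guest's discards are still forwarded to the image file while the cluster allocations themselves are retained.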