Message ID | 20220704202315.507145-3-leobras@redhat.com (mailing list archive)
---|---
State | New, archived
Series | Zero copy improvements (QIOChannel + multifd)
Leonardo Bras <leobras@redhat.com> writes:

> Signed-off-by: Leonardo Bras <leobras@redhat.com>

Acked-by: Markus Armbruster <armbru@redhat.com>
On Mon, Jul 04, 2022 at 05:23:14PM -0300, Leonardo Bras wrote:
> Signed-off-by: Leonardo Bras <leobras@redhat.com>
> ---
>  qapi/migration.json   | 7 ++++++-
>  migration/migration.c | 2 ++
>  monitor/hmp-cmds.c    | 4 ++++
>  3 files changed, 12 insertions(+), 1 deletion(-)

Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

With regards,
Daniel
On Mon, Jul 04, 2022 at 05:23:14PM -0300, Leonardo Bras wrote:
> Signed-off-by: Leonardo Bras <leobras@redhat.com>
> ---
>  qapi/migration.json   | 7 ++++++-
>  migration/migration.c | 2 ++
>  monitor/hmp-cmds.c    | 4 ++++
>  3 files changed, 12 insertions(+), 1 deletion(-)
>
> diff --git a/qapi/migration.json b/qapi/migration.json
> index 7102e474a6..fed08b9b88 100644
> --- a/qapi/migration.json
> +++ b/qapi/migration.json
> @@ -55,6 +55,10 @@
>  # @postcopy-bytes: The number of bytes sent during the post-copy phase
>  #                  (since 7.0).
>  #
> +# @dirty-sync-missed-zero-copy: Number of times dirty RAM synchronization could
> +#                               not avoid copying zero pages. This is between 0

Avoid copying zero pages?  Isn't this for counting MSG_ZEROCOPY fallbacks?

> +#                               and @dirty-sync-count * @multifd-channels.

I'd not name it "dirty-sync-*" because fundamentally the accounting is
not done like that (more in a later patch).  I also think we should squash
patch 2/3 as patch 3 only started to provide meaningful values.

> +#                               (since 7.1)
>  # Since: 0.14
>  ##
>  { 'struct': 'MigrationStats',
> @@ -65,7 +69,8 @@
>             'postcopy-requests' : 'int', 'page-size' : 'int',
>             'multifd-bytes' : 'uint64', 'pages-per-second' : 'uint64',
>             'precopy-bytes' : 'uint64', 'downtime-bytes' : 'uint64',
> -           'postcopy-bytes' : 'uint64' } }
> +           'postcopy-bytes' : 'uint64',
> +           'dirty-sync-missed-zero-copy' : 'uint64' } }
>
>  ##
>  # @XBZRLECacheStats:
> diff --git a/migration/migration.c b/migration/migration.c
> index 78f5057373..048f7f8bdb 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -1027,6 +1027,8 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
>      info->ram->normal_bytes = ram_counters.normal * page_size;
>      info->ram->mbps = s->mbps;
>      info->ram->dirty_sync_count = ram_counters.dirty_sync_count;
> +    info->ram->dirty_sync_missed_zero_copy =
> +        ram_counters.dirty_sync_missed_zero_copy;
>      info->ram->postcopy_requests = ram_counters.postcopy_requests;
>      info->ram->page_size = page_size;
>      info->ram->multifd_bytes = ram_counters.multifd_bytes;
> diff --git a/monitor/hmp-cmds.c b/monitor/hmp-cmds.c
> index ca98df0495..5f3be9e405 100644
> --- a/monitor/hmp-cmds.c
> +++ b/monitor/hmp-cmds.c
> @@ -307,6 +307,10 @@ void hmp_info_migrate(Monitor *mon, const QDict *qdict)
>          monitor_printf(mon, "postcopy ram: %" PRIu64 " kbytes\n",
>                         info->ram->postcopy_bytes >> 10);
>      }
> +    if (info->ram->dirty_sync_missed_zero_copy) {
> +        monitor_printf(mon, "missed zero-copy on: %" PRIu64 " iterations\n",
> +                       info->ram->dirty_sync_missed_zero_copy);

I suggest we don't call it "iterations" because it's not the generic
meaning of iterations.

> +    }
>  }
>
>  if (info->has_disk) {
> --
> 2.36.1
>
Hello Peter,

On Thu, Jul 7, 2022 at 2:54 PM Peter Xu <peterx@redhat.com> wrote:
>
> On Mon, Jul 04, 2022 at 05:23:14PM -0300, Leonardo Bras wrote:
> > Signed-off-by: Leonardo Bras <leobras@redhat.com>
> > ---
> >  qapi/migration.json   | 7 ++++++-
> >  migration/migration.c | 2 ++
> >  monitor/hmp-cmds.c    | 4 ++++
> >  3 files changed, 12 insertions(+), 1 deletion(-)
> >
> > diff --git a/qapi/migration.json b/qapi/migration.json
> > index 7102e474a6..fed08b9b88 100644
> > --- a/qapi/migration.json
> > +++ b/qapi/migration.json
> > @@ -55,6 +55,10 @@
> >  # @postcopy-bytes: The number of bytes sent during the post-copy phase
> >  #                  (since 7.0).
> >  #
> > +# @dirty-sync-missed-zero-copy: Number of times dirty RAM synchronization could
> > +#                               not avoid copying zero pages. This is between 0
>
> Avoid copying zero pages?  Isn't this for counting MSG_ZEROCOPY fallbacks?

Yes, sorry, I think I got confused at some point between some cuts & pastes.
It should be "not avoid copying dirty pages". I will fix that in a v4.

> > +#                               and @dirty-sync-count * @multifd-channels.
>
> I'd not name it "dirty-sync-*" because fundamentally the accounting is
> not done like that (more in a later patch).

Ok, I will take a look & answer there.

> I also think we should squash
> patch 2/3 as patch 3 only started to provide meaningful values.

IIRC, previously in the zero-copy-send implementation I was asked to keep
the property/capability in a separate patch in order to make it easier to
review, so I thought it would be helpful now too.

> > +#                               (since 7.1)
> >  # Since: 0.14
> >  ##

[rest of the quoted qapi/migration.json and migration/migration.c hunks snipped]

> > +    if (info->ram->dirty_sync_missed_zero_copy) {
> > +        monitor_printf(mon, "missed zero-copy on: %" PRIu64 " iterations\n",
> > +                       info->ram->dirty_sync_missed_zero_copy);
>
> I suggest we don't call it "iterations" because it's not the generic
> meaning of iterations.

Yeah, I thought that too, but I could not think of anything better.
What do you suggest instead?

Best regards,
Leo
On Thu, Jul 07, 2022 at 04:50:47PM -0300, Leonardo Bras Soares Passos wrote:
> > I also think we should squash
> > patch 2/3 as patch 3 only started to provide meaningful values.
>
> IIRC, previously in the zero-copy-send implementation I was asked to keep
> the property/capability in a separate patch in order to make it easier to
> review, so I thought it would be helpful now too.

Ah, that's fine then.

> > > diff --git a/monitor/hmp-cmds.c b/monitor/hmp-cmds.c
> > > index ca98df0495..5f3be9e405 100644
> > > --- a/monitor/hmp-cmds.c
> > > +++ b/monitor/hmp-cmds.c
> > > @@ -307,6 +307,10 @@ void hmp_info_migrate(Monitor *mon, const QDict *qdict)
> > >          monitor_printf(mon, "postcopy ram: %" PRIu64 " kbytes\n",
> > >                         info->ram->postcopy_bytes >> 10);
> > >      }
> > > +    if (info->ram->dirty_sync_missed_zero_copy) {
> > > +        monitor_printf(mon, "missed zero-copy on: %" PRIu64 " iterations\n",
> > > +                       info->ram->dirty_sync_missed_zero_copy);
> >
> > I suggest we don't call it "iterations" because it's not the generic
> > meaning of iterations.
>
> Yeah, I thought that too, but I could not think of anything better.
> What do you suggest instead?

"Zero-copy-send fallbacks happened: xxx times\n"?
On Thu, 2022-07-07 at 15:56 -0400, Peter Xu wrote:
> On Thu, Jul 07, 2022 at 04:50:47PM -0300, Leonardo Bras Soares Passos wrote:
> > > I also think we should squash
> > > patch 2/3 as patch 3 only started to provide meaningful values.
> >
> > IIRC, previously in the zero-copy-send implementation I was asked to keep
> > the property/capability in a separate patch in order to make it easier to
> > review, so I thought it would be helpful now too.
>
> Ah, that's fine then.
>
> > > > +    if (info->ram->dirty_sync_missed_zero_copy) {
> > > > +        monitor_printf(mon, "missed zero-copy on: %" PRIu64 " iterations\n",
> > > > +                       info->ram->dirty_sync_missed_zero_copy);
> > >
> > > I suggest we don't call it "iterations" because it's not the generic
> > > meaning of iterations.
> >
> > Yeah, I thought that too, but I could not think of anything better.
> > What do you suggest instead?
>
> "Zero-copy-send fallbacks happened: xxx times\n"?

Oh, yeah, that will work. I was trying to keep the existing pattern and
ended up wondering what the correct unit was, but this is much simpler
and works better.

Best regards,
Leo
diff --git a/qapi/migration.json b/qapi/migration.json
index 7102e474a6..fed08b9b88 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -55,6 +55,10 @@
 # @postcopy-bytes: The number of bytes sent during the post-copy phase
 #                  (since 7.0).
 #
+# @dirty-sync-missed-zero-copy: Number of times dirty RAM synchronization could
+#                               not avoid copying zero pages. This is between 0
+#                               and @dirty-sync-count * @multifd-channels.
+#                               (since 7.1)
 # Since: 0.14
 ##
 { 'struct': 'MigrationStats',
@@ -65,7 +69,8 @@
            'postcopy-requests' : 'int', 'page-size' : 'int',
            'multifd-bytes' : 'uint64', 'pages-per-second' : 'uint64',
            'precopy-bytes' : 'uint64', 'downtime-bytes' : 'uint64',
-           'postcopy-bytes' : 'uint64' } }
+           'postcopy-bytes' : 'uint64',
+           'dirty-sync-missed-zero-copy' : 'uint64' } }
 
 ##
 # @XBZRLECacheStats:
diff --git a/migration/migration.c b/migration/migration.c
index 78f5057373..048f7f8bdb 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1027,6 +1027,8 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
     info->ram->normal_bytes = ram_counters.normal * page_size;
     info->ram->mbps = s->mbps;
     info->ram->dirty_sync_count = ram_counters.dirty_sync_count;
+    info->ram->dirty_sync_missed_zero_copy =
+        ram_counters.dirty_sync_missed_zero_copy;
     info->ram->postcopy_requests = ram_counters.postcopy_requests;
     info->ram->page_size = page_size;
     info->ram->multifd_bytes = ram_counters.multifd_bytes;
diff --git a/monitor/hmp-cmds.c b/monitor/hmp-cmds.c
index ca98df0495..5f3be9e405 100644
--- a/monitor/hmp-cmds.c
+++ b/monitor/hmp-cmds.c
@@ -307,6 +307,10 @@ void hmp_info_migrate(Monitor *mon, const QDict *qdict)
         monitor_printf(mon, "postcopy ram: %" PRIu64 " kbytes\n",
                        info->ram->postcopy_bytes >> 10);
     }
+    if (info->ram->dirty_sync_missed_zero_copy) {
+        monitor_printf(mon, "missed zero-copy on: %" PRIu64 " iterations\n",
+                       info->ram->dirty_sync_missed_zero_copy);
+    }
 }
 
 if (info->has_disk) {
Signed-off-by: Leonardo Bras <leobras@redhat.com>
---
 qapi/migration.json   | 7 ++++++-
 migration/migration.c | 2 ++
 monitor/hmp-cmds.c    | 4 ++++
 3 files changed, 12 insertions(+), 1 deletion(-)