| Message ID | 20180612085009.17594-2-bala24@linux.vnet.ibm.com (mailing list archive) |
|---|---|
| State | New, archived |
* Balamuruhan S (bala24@linux.vnet.ibm.com) wrote:
> expected_downtime value is not accurate with dirty_pages_rate * page_size;
> using ram_bytes_remaining() yields a reasonable value.
>
> Read the remaining RAM just after updating the dirty pages count via
> migration_bitmap_sync_range() in migration_bitmap_sync(), and reuse the
> `remaining` field in ram_counters to hold ram_bytes_remaining() for
> calculating expected_downtime.
>
> Reported-by: Michael Roth <mdroth@linux.vnet.ibm.com>
> Signed-off-by: Balamuruhan S <bala24@linux.vnet.ibm.com>
> Signed-off-by: Laurent Vivier <lvivier@redhat.com>
> ---
>  migration/migration.c | 3 +--
>  migration/ram.c       | 1 +
>  2 files changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/migration/migration.c b/migration/migration.c
> index ea9a6cbb87..cb14dadb26 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -2739,8 +2739,7 @@ static void migration_update_counters(MigrationState *s,
>       * recalculate. 10000 is a small enough number for our purposes
>       */
>      if (ram_counters.dirty_pages_rate && transferred > 10000) {
> -        s->expected_downtime = ram_counters.dirty_pages_rate *
> -            qemu_target_page_size() / bandwidth;
> +        s->expected_downtime = ram_counters.remaining / bandwidth;
>      }
>
>      qemu_file_reset_rate_limit(s->to_dst_file);
> diff --git a/migration/ram.c b/migration/ram.c
> index a500015a2f..a94a2b829e 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -1159,6 +1159,7 @@ static void migration_bitmap_sync(RAMState *rs)
>      RAMBLOCK_FOREACH_MIGRATABLE(block) {
>          migration_bitmap_sync_range(rs, block, 0, block->used_length);
>      }
> +    ram_counters.remaining = ram_bytes_remaining();
>      rcu_read_unlock();
>      qemu_mutex_unlock(&rs->bitmap_mutex);

OK, that's interesting.

One thing to note is that migration_bitmap_sync isn't just called when we've
run out of dirty pages and need to see if there are any fresh ones. In the
case where we hit the bandwidth limit, we call it whenever we go back through
ram_save_pending even though the current remaining isn't 0 yet (it's got to
be below the threshold for us to resync) - so you may overestimate a bit?

Anyway, I think it's worth a go;

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> --
> 2.14.3

--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
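To make the difference between the two estimates concrete, here is a minimal
standalone C sketch (illustration only, not QEMU code; all numbers are
hypothetical). The old formula prices one second's worth of freshly dirtied
pages, while the new one prices the data still outstanding at the last bitmap
sync, which is what actually has to be flushed during downtime.

    /* Standalone sketch, not QEMU code. Units: bytes and bytes/second. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Assumed sample values, for illustration only. */
        uint64_t page_size        = 4096;        /* target page size */
        uint64_t dirty_pages_rate = 100000;      /* pages dirtied per second */
        uint64_t remaining        = 2ULL << 30;  /* 2 GiB still to send */
        uint64_t bandwidth        = 1ULL << 30;  /* 1 GiB/s link */

        /* Old estimate: time to send one second's worth of newly dirtied
         * pages -- a rate, not the outstanding data. */
        double old_est = (double)(dirty_pages_rate * page_size) / bandwidth;

        /* New estimate: time to send everything still dirty, as sampled
         * right after the bitmap sync. */
        double new_est = (double)remaining / bandwidth;

        printf("old expected_downtime: %.3f s\n", old_est); /* ~0.381 s */
        printf("new expected_downtime: %.3f s\n", new_est); /* ~2.000 s */
        return 0;
    }

With these sample numbers the old formula reports well under half a second
even though flushing the outstanding 2 GiB would take about two seconds.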
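Dave's caveat can be sketched the same way (again with hypothetical numbers):
under a bandwidth limit the bitmap can be resynced while remaining is still
well above the stop threshold, so the snapshot-based estimate may overstate
the downtime of the eventual stop-and-copy.

    /* Standalone sketch, not QEMU code; all values assumed. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t bandwidth       = 1ULL << 30;    /* 1 GiB/s */
        uint64_t stop_threshold  = 100ULL << 20;  /* stop at <= 100 MiB left */

        /* Snapshot taken at a bandwidth-limited resync, well before we
         * are actually ready to stop the guest. */
        uint64_t remaining_at_sync = 512ULL << 20; /* 512 MiB */

        double estimate = (double)remaining_at_sync / bandwidth;
        double actual   = (double)stop_threshold / bandwidth;

        printf("estimate from last sync snapshot: %.3f s\n", estimate); /* 0.500 */
        printf("downtime if we stop at threshold: %.3f s\n", actual);   /* ~0.098 */
        return 0;
    }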