| Message ID | 20220425215055.611825-6-leobras@redhat.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | MSG_ZEROCOPY + multifd |
On Mon, Apr 25, 2022 at 06:50:54PM -0300, Leonardo Bras wrote:
> Even though multifd_send_sync_main() currently emits error_reports, its
> callers don't really check it before continuing.
>
> Change multifd_send_sync_main() to return -1 on error and 0 on success.
> Also change all its callers to make use of this change and possibly fail
> earlier.
>
> (This change is important to the next patch in the multifd zero-copy
> implementation, to make sure an error in the zero-copy flush does not go
> unnoticed.)
>
> Signed-off-by: Leonardo Bras <leobras@redhat.com>
> ---
>  migration/multifd.h |  2 +-
>  migration/multifd.c | 10 ++++++----
>  migration/ram.c     | 29 ++++++++++++++++++++++-------
>  3 files changed, 29 insertions(+), 12 deletions(-)

Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

With regards,
Daniel
On Mon, Apr 25, 2022 at 06:50:54PM -0300, Leonardo Bras wrote:
> Even though multifd_send_sync_main() currently emits error_reports, its
> callers don't really check it before continuing.
>
> Change multifd_send_sync_main() to return -1 on error and 0 on success.
> Also change all its callers to make use of this change and possibly fail
> earlier.
>
> (This change is important to the next patch in the multifd zero-copy
> implementation, to make sure an error in the zero-copy flush does not go
> unnoticed.)
>
> Signed-off-by: Leonardo Bras <leobras@redhat.com>

Reviewed-by: Peter Xu <peterx@redhat.com>
On Tue, Apr 26, 2022 at 9:59 AM Peter Xu <peterx@redhat.com> wrote:
>
> On Mon, Apr 25, 2022 at 06:50:54PM -0300, Leonardo Bras wrote:
> > Even though multifd_send_sync_main() currently emits error_reports, its
> > callers don't really check it before continuing.
> >
> > Change multifd_send_sync_main() to return -1 on error and 0 on success.
> > Also change all its callers to make use of this change and possibly fail
> > earlier.
> >
> > (This change is important to the next patch in the multifd zero-copy
> > implementation, to make sure an error in the zero-copy flush does not go
> > unnoticed.)
> >
> > Signed-off-by: Leonardo Bras <leobras@redhat.com>
>
> Reviewed-by: Peter Xu <peterx@redhat.com>
>
> --
> Peter Xu
>

Thanks Peter!

Best regards,
Leo
```diff
diff --git a/migration/multifd.h b/migration/multifd.h
index 7d0effcb03..bcf5992945 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -20,7 +20,7 @@ int multifd_load_cleanup(Error **errp);
 bool multifd_recv_all_channels_created(void);
 bool multifd_recv_new_channel(QIOChannel *ioc, Error **errp);
 void multifd_recv_sync_main(void);
-void multifd_send_sync_main(QEMUFile *f);
+int multifd_send_sync_main(QEMUFile *f);
 int multifd_queue_page(QEMUFile *f, RAMBlock *block, ram_addr_t offset);
 
 /* Multifd Compression flags */
diff --git a/migration/multifd.c b/migration/multifd.c
index 2a8c8570c3..15fb668e64 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -566,17 +566,17 @@ void multifd_save_cleanup(void)
     multifd_send_state = NULL;
 }
 
-void multifd_send_sync_main(QEMUFile *f)
+int multifd_send_sync_main(QEMUFile *f)
 {
     int i;
 
     if (!migrate_use_multifd()) {
-        return;
+        return 0;
     }
     if (multifd_send_state->pages->num) {
         if (multifd_send_pages(f) < 0) {
             error_report("%s: multifd_send_pages fail", __func__);
-            return;
+            return -1;
         }
     }
     for (i = 0; i < migrate_multifd_channels(); i++) {
@@ -589,7 +589,7 @@ void multifd_send_sync_main(QEMUFile *f)
         if (p->quit) {
             error_report("%s: channel %d has already quit", __func__, i);
             qemu_mutex_unlock(&p->mutex);
-            return;
+            return -1;
         }
 
         p->packet_num = multifd_send_state->packet_num++;
@@ -608,6 +608,8 @@ void multifd_send_sync_main(QEMUFile *f)
         qemu_sem_wait(&p->sem_sync);
     }
     trace_multifd_send_sync_main(multifd_send_state->packet_num);
+
+    return 0;
 }
 
 static void *multifd_send_thread(void *opaque)
diff --git a/migration/ram.c b/migration/ram.c
index a2489a2699..5f5e37f64d 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2909,6 +2909,7 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
 {
     RAMState **rsp = opaque;
     RAMBlock *block;
+    int ret;
 
     if (compress_threads_save_setup()) {
         return -1;
@@ -2943,7 +2944,11 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
     ram_control_before_iterate(f, RAM_CONTROL_SETUP);
     ram_control_after_iterate(f, RAM_CONTROL_SETUP);
 
-    multifd_send_sync_main(f);
+    ret = multifd_send_sync_main(f);
+    if (ret < 0) {
+        return ret;
+    }
+
     qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
     qemu_fflush(f);
 
@@ -3052,7 +3057,11 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
 out:
     if (ret >= 0
         && migration_is_setup_or_active(migrate_get_current()->state)) {
-        multifd_send_sync_main(rs->f);
+        ret = multifd_send_sync_main(rs->f);
+        if (ret < 0) {
+            return ret;
+        }
+
         qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
         qemu_fflush(f);
         ram_transferred_add(8);
@@ -3112,13 +3121,19 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
         ram_control_after_iterate(f, RAM_CONTROL_FINISH);
     }
 
-    if (ret >= 0) {
-        multifd_send_sync_main(rs->f);
-        qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
-        qemu_fflush(f);
+    if (ret < 0) {
+        return ret;
     }
 
-    return ret;
+    ret = multifd_send_sync_main(rs->f);
+    if (ret < 0) {
+        return ret;
+    }
+
+    qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
+    qemu_fflush(f);
+
+    return 0;
 }
 
 static void ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
```
Even though multifd_send_sync_main() currently emits error_reports, its
callers don't really check it before continuing.

Change multifd_send_sync_main() to return -1 on error and 0 on success.
Also change all its callers to make use of this change and possibly fail
earlier.

(This change is important to the next patch in the multifd zero-copy
implementation, to make sure an error in the zero-copy flush does not go
unnoticed.)

Signed-off-by: Leonardo Bras <leobras@redhat.com>
---
 migration/multifd.h |  2 +-
 migration/multifd.c | 10 ++++++----
 migration/ram.c     | 29 ++++++++++++++++++++++-------
 3 files changed, 29 insertions(+), 12 deletions(-)
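To make the new calling convention concrete, here is a minimal, self-contained sketch of the error-propagation pattern this patch establishes. The stub names (`send_sync_main`, `save_complete`) and the simulated failure flag are hypothetical stand-ins for `multifd_send_sync_main()` and its callers, not QEMU's actual API:

```c
#include <stdio.h>

/*
 * Stand-in for multifd_send_sync_main() after this patch:
 * returns 0 on success and -1 on error, instead of void.
 */
static int send_sync_main(int simulate_failure)
{
    if (simulate_failure) {
        fprintf(stderr, "sync: channel error\n");
        return -1;
    }
    return 0;
}

/*
 * Stand-in for a caller such as ram_save_complete(): the return
 * value is now checked, so a failed sync aborts the save path early
 * instead of going unnoticed as with the old void signature.
 */
static int save_complete(int simulate_failure)
{
    int ret = send_sync_main(simulate_failure);
    if (ret < 0) {
        return ret; /* fail earlier; skip the EOS marker below */
    }
    /* qemu_put_be64(f, RAM_SAVE_FLAG_EOS) / qemu_fflush(f) in the real code */
    printf("EOS marker written and flushed\n");
    return 0;
}

int main(void)
{
    printf("success path -> %d\n", save_complete(0));
    printf("error path   -> %d\n", save_complete(1));
    return 0;
}
```

The design choice matters for the next patch in the series: once a zero-copy flush can fail inside the sync path, the only way for migration to abort cleanly is for that failure to propagate through `ram_save_setup()`, `ram_save_iterate()`, and `ram_save_complete()` via the return value, exactly as the diff above does.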