From patchwork Tue Jan 24 07:59:52 2017
X-Patchwork-Submitter: Zhanghailiang
X-Patchwork-Id: 9534263
From: zhanghailiang
Date: Tue, 24 Jan 2017 15:59:52 +0800
Message-ID: <1485244792-11248-1-git-send-email-zhang.zhanghailiang@huawei.com>
Subject: [Qemu-devel] [PATCH v2] migration: re-activate images if migration is cancelled after inactivating them
Cc: kwolf@redhat.com, zhanghailiang, qemu-block@nongnu.org, quintela@redhat.com, dgilbert@redhat.com, amit.shah@redhat.com

Commit fe904ea8242cbae2d7e69c052c754b8f5f1ba1d6 fixed a case in which
migration aborted QEMU because it failed to regain control of the images
after an error occurred.
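
For readers outside the block layer, here is a minimal standalone sketch
(plain C, not QEMU code; the type and helper names are stand-ins) of the
invariant behind the assertion quoted in the next paragraph: inactivating
the images sets a flag on each image (0x0800 corresponds to BDRV_O_INACTIVE
in this tree), any write issued while that flag is still set trips the
assertion, so whichever side keeps running afterwards must first re-activate
the images via bdrv_invalidate_cache_all():

/* Hedged, standalone illustration -- not QEMU code. */
#include <assert.h>
#include <stdio.h>

#define BDRV_O_INACTIVE 0x0800                 /* the bit named in the assertion */

typedef struct {
    int open_flags;
} FakeBDS;                                     /* stand-in for BlockDriverState  */

static void fake_inactivate(FakeBDS *bs)       /* effect of bdrv_inactivate_all()      */
{
    bs->open_flags |= BDRV_O_INACTIVE;
}

static void fake_invalidate_cache(FakeBDS *bs) /* effect of bdrv_invalidate_cache_all() */
{
    bs->open_flags &= ~BDRV_O_INACTIVE;
}

static void fake_pwritev(FakeBDS *bs)          /* the check bdrv_co_do_pwritev() makes  */
{
    assert(!(bs->open_flags & BDRV_O_INACTIVE));
    printf("write accepted\n");
}

int main(void)
{
    FakeBDS bs = { 0 };

    fake_inactivate(&bs);          /* images handed over to the destination  */
    fake_invalidate_cache(&bs);    /* what a cancelled source must do first  */
    fake_pwritev(&bs);             /* without the previous line, this aborts */
    return 0;
}
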
Actually, there are two more cases that can trigger the same error report:
"bdrv_co_do_pwritev: Assertion `!(bs->open_flags & 0x0800)' failed".

Case 1, code path:
migration_thread()
    migration_completion()
        bdrv_inactivate_all() ----------------> inactivate images
        qemu_savevm_state_complete_precopy()
            socket_writev_buffer() -----------> error because destination fails
              qemu_fflush() ------------------> set error on migration stream
 -> qmp_migrate_cancel() ---------------------> user cancels migration concurrently
 -> migrate_set_state() ----------------------> set migrate CANCELLING
        migration_completion() ---------------> go on to fail_invalidate
            if (s->state == MIGRATION_STATUS_ACTIVE) -> skip this branch

Case 2, code path:
migration_thread()
    migration_completion()
        bdrv_inactivate_all() ----------------> inactivate images
    migration_completion() finished
 -> qmp_migrate_cancel() ---------------------> user cancels migration concurrently
    qemu_mutex_lock_iothread();
    qemu_bh_schedule(s->cleanup_bh);

As we can see from the above, qmp_migrate_cancel() can slip in whenever
migration_thread() does not hold the global lock. If this happens after
bdrv_inactivate_all() has been called, the above error report will appear.

To prevent this, call bdrv_invalidate_cache_all() directly in
qmp_migrate_cancel() if we find that the images have been inactivated.

Besides, the bdrv_invalidate_cache_all() call in migration_completion() is
not protected by the big lock; fix that by adding the missing
qemu_mutex_lock_iothread()/qemu_mutex_unlock_iothread() pair.

Signed-off-by: zhanghailiang
Reviewed-by: Dr. David Alan Gilbert
Reviewed-by: Stefan Hajnoczi
---
v2:
 - Fix a bug introduced by commit fe904 which didn't take the big lock
   before calling bdrv_invalidate_cache_all(). (Suggested by Dave)
---
 include/migration/migration.h |  3 +++
 migration/migration.c         | 15 +++++++++++++++
 2 files changed, 18 insertions(+)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index c309d23..2d5b724 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -177,6 +177,9 @@ struct MigrationState
     /* Flag set once the migration thread is running (and needs joining) */
     bool migration_thread_running;
 
+    /* Flag set once the migration thread called bdrv_inactivate_all */
+    bool block_inactive;
+
     /* Queue of outstanding page requests from the destination */
     QemuMutex src_page_req_mutex;
     QSIMPLEQ_HEAD(src_page_requests, MigrationSrcPageRequest) src_page_requests;
diff --git a/migration/migration.c b/migration/migration.c
index f498ab8..5b50afe 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1006,6 +1006,16 @@ static void migrate_fd_cancel(MigrationState *s)
     if (s->state == MIGRATION_STATUS_CANCELLING && f) {
         qemu_file_shutdown(f);
     }
+    if (s->state == MIGRATION_STATUS_CANCELLING && s->block_inactive) {
+        Error *local_err = NULL;
+
+        bdrv_invalidate_cache_all(&local_err);
+        if (local_err) {
+            error_report_err(local_err);
+        } else {
+            s->block_inactive = false;
+        }
+    }
 }
 
 void add_migration_state_change_notifier(Notifier *notify)
@@ -1705,6 +1715,7 @@ static void migration_completion(MigrationState *s, int current_active_state,
         if (ret >= 0) {
             qemu_file_set_rate_limit(s->to_dst_file, INT64_MAX);
             qemu_savevm_state_complete_precopy(s->to_dst_file, false);
+            s->block_inactive = true;
         }
     }
     qemu_mutex_unlock_iothread();
@@ -1755,10 +1766,14 @@ fail_invalidate:
     if (s->state == MIGRATION_STATUS_ACTIVE) {
         Error *local_err = NULL;
 
+        qemu_mutex_lock_iothread();
         bdrv_invalidate_cache_all(&local_err);
         if (local_err) {
             error_report_err(local_err);
+        } else {
+            s->block_inactive = false;
         }
+        qemu_mutex_unlock_iothread();
     }
 
 fail:
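
To make the intended interplay between the new block_inactive flag and the
big lock easier to follow, here is a hedged pthread sketch, not a
transcription of QEMU's logic: the mutex stands in for the BQL, and unlike
the real code both paths below simply re-activate whenever they still see
the flag set, whereas upstream the completion path only does so while the
state is still ACTIVE and the new hunk in migrate_fd_cancel() handles the
CANCELLING case.

/* Hedged model of the locking scheme -- plain pthreads, not QEMU code. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER; /* stands in for the BQL */
static bool block_inactive;        /* mirrors the new MigrationState field          */
static bool cancelling;            /* stands in for MIGRATION_STATUS_CANCELLING     */

static void reactivate_images(const char *who)  /* bdrv_invalidate_cache_all() stub */
{
    printf("%s: images re-activated\n", who);
}

/* Monitor context: qmp_migrate_cancel() -> migrate_fd_cancel(). */
static void *cancel_path(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&big_lock);
    cancelling = true;
    if (cancelling && block_inactive) {   /* the new hunk in migrate_fd_cancel() */
        reactivate_images("cancel");
        block_inactive = false;
    }
    pthread_mutex_unlock(&big_lock);
    return NULL;
}

/* Migration thread: completion fails and falls through to fail_invalidate. */
static void *migration_path(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&big_lock);
    block_inactive = true;                /* bdrv_inactivate_all() succeeded */
    pthread_mutex_unlock(&big_lock);

    /* ...sending device state fails, jump to fail_invalidate... */

    pthread_mutex_lock(&big_lock);        /* the lock this v2 respin adds */
    if (block_inactive) {
        reactivate_images("completion");
        block_inactive = false;
    }
    pthread_mutex_unlock(&big_lock);
    return NULL;
}

int main(void)
{
    pthread_t mig, cancel;

    pthread_create(&mig, NULL, migration_path, NULL);
    pthread_create(&cancel, NULL, cancel_path, NULL);
    pthread_join(mig, NULL);
    pthread_join(cancel, NULL);
    printf("block_inactive at exit: %d\n", block_inactive);
    return 0;
}

However the cancel request interleaves with the migration thread in this
model, exactly one of the two critical sections observes block_inactive set,
re-activates the images and clears the flag, which is the end result the
flag is meant to guarantee.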