From patchwork Fri Feb 15 17:45:44 2019
X-Patchwork-Submitter: Yury Kotov
X-Patchwork-Id: 10815617
From: Yury Kotov
David Alan Gilbert" , Eduardo Habkost , Eric Blake , Igor Mammedov , Juan Quintela , Laurent Vivier , Markus Armbruster , Paolo Bonzini , Peter Crosthwaite , Richard Henderson , Thomas Huth , qemu-devel@nongnu.org Date: Fri, 15 Feb 2019 20:45:44 +0300 Message-Id: <20190215174548.2630-2-yury-kotov@yandex-team.ru> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190215174548.2630-1-yury-kotov@yandex-team.ru> References: <20190215174548.2630-1-yury-kotov@yandex-team.ru> MIME-Version: 1.0 X-detected-operating-system: by eggs.gnu.org: GNU/Linux 2.2.x-3.x [generic] X-Received-From: 2a02:6b8:0:1465::fd Subject: [Qemu-devel] [PATCH v3 1/5] exec: Change RAMBlockIterFunc definition X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: peter.maydell@linaro.org, jiangshanlai@gmail.com, qemu-devel@lists.ewheeler.net, wrfsh@yandex-team.ru Errors-To: qemu-devel-bounces+patchwork-qemu-devel=patchwork.kernel.org@nongnu.org Sender: "Qemu-devel" X-Virus-Scanned: ClamAV using ClamSMTP Currently, qemu_ram_foreach_* calls RAMBlockIterFunc with many block-specific arguments. But often iter func needs RAMBlock*. This refactoring is needed for fast access to RAMBlock flags from qemu_ram_foreach_block's callback. The only way to achieve this now is to call qemu_ram_block_from_host (which also enumerates blocks). So, this patch reduces complexity of qemu_ram_foreach_block() -> cb() -> qemu_ram_block_from_host() from O(n^2) to O(n). Fix RAMBlockIterFunc definition and add some functions to read RAMBlock* fields witch were passed. Signed-off-by: Yury Kotov Reviewed-by: Dr. David Alan Gilbert --- exec.c | 21 +++++++++++++++++---- include/exec/cpu-common.h | 6 ++++-- migration/postcopy-ram.c | 36 +++++++++++++++++++++--------------- migration/rdma.c | 7 +++++-- stubs/ram-block.c | 15 +++++++++++++++ util/vfio-helpers.c | 6 +++--- 6 files changed, 65 insertions(+), 26 deletions(-) diff --git a/exec.c b/exec.c index 518064530b..bf71462929 100644 --- a/exec.c +++ b/exec.c @@ -1972,6 +1972,21 @@ const char *qemu_ram_get_idstr(RAMBlock *rb) return rb->idstr; } +void *qemu_ram_get_host_addr(RAMBlock *rb) +{ + return rb->host; +} + +ram_addr_t qemu_ram_get_offset(RAMBlock *rb) +{ + return rb->offset; +} + +ram_addr_t qemu_ram_get_used_length(RAMBlock *rb) +{ + return rb->used_length; +} + bool qemu_ram_is_shared(RAMBlock *rb) { return rb->flags & RAM_SHARED; @@ -3961,8 +3976,7 @@ int qemu_ram_foreach_block(RAMBlockIterFunc func, void *opaque) rcu_read_lock(); RAMBLOCK_FOREACH(block) { - ret = func(block->idstr, block->host, block->offset, - block->used_length, opaque); + ret = func(block, opaque); if (ret) { break; } @@ -3981,8 +3995,7 @@ int qemu_ram_foreach_migratable_block(RAMBlockIterFunc func, void *opaque) if (!qemu_ram_is_migratable(block)) { continue; } - ret = func(block->idstr, block->host, block->offset, - block->used_length, opaque); + ret = func(block, opaque); if (ret) { break; } diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h index 63ec1f9b37..9cd1f94bc5 100644 --- a/include/exec/cpu-common.h +++ b/include/exec/cpu-common.h @@ -72,6 +72,9 @@ ram_addr_t qemu_ram_block_host_offset(RAMBlock *rb, void *host); void qemu_ram_set_idstr(RAMBlock *block, const char *name, DeviceState *dev); void qemu_ram_unset_idstr(RAMBlock *block); const char *qemu_ram_get_idstr(RAMBlock *rb); +void *qemu_ram_get_host_addr(RAMBlock *rb); +ram_addr_t qemu_ram_get_offset(RAMBlock *rb); +ram_addr_t 
qemu_ram_get_used_length(RAMBlock *rb); bool qemu_ram_is_shared(RAMBlock *rb); bool qemu_ram_is_uf_zeroable(RAMBlock *rb); void qemu_ram_set_uf_zeroable(RAMBlock *rb); @@ -116,8 +119,7 @@ void cpu_flush_icache_range(hwaddr start, hwaddr len); extern struct MemoryRegion io_mem_rom; extern struct MemoryRegion io_mem_notdirty; -typedef int (RAMBlockIterFunc)(const char *block_name, void *host_addr, - ram_addr_t offset, ram_addr_t length, void *opaque); +typedef int (RAMBlockIterFunc)(RAMBlock *rb, void *opaque); int qemu_ram_foreach_block(RAMBlockIterFunc func, void *opaque); int qemu_ram_foreach_migratable_block(RAMBlockIterFunc func, void *opaque); diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c index fa09dba534..b098816221 100644 --- a/migration/postcopy-ram.c +++ b/migration/postcopy-ram.c @@ -319,10 +319,10 @@ static bool ufd_check_and_apply(int ufd, MigrationIncomingState *mis) /* Callback from postcopy_ram_supported_by_host block iterator. */ -static int test_ramblock_postcopiable(const char *block_name, void *host_addr, - ram_addr_t offset, ram_addr_t length, void *opaque) +static int test_ramblock_postcopiable(RAMBlock *rb, void *opaque) { - RAMBlock *rb = qemu_ram_block_by_name(block_name); + const char *block_name = qemu_ram_get_idstr(rb); + ram_addr_t length = qemu_ram_get_used_length(rb); size_t pagesize = qemu_ram_pagesize(rb); if (length % pagesize) { @@ -443,9 +443,12 @@ out: * must be done right at the start prior to pre-copy. * opaque should be the MIS. */ -static int init_range(const char *block_name, void *host_addr, - ram_addr_t offset, ram_addr_t length, void *opaque) +static int init_range(RAMBlock *rb, void *opaque) { + const char *block_name = qemu_ram_get_idstr(rb); + void *host_addr = qemu_ram_get_host_addr(rb); + ram_addr_t offset = qemu_ram_get_offset(rb); + ram_addr_t length = qemu_ram_get_used_length(rb); trace_postcopy_init_range(block_name, host_addr, offset, length); /* @@ -465,9 +468,12 @@ static int init_range(const char *block_name, void *host_addr, * At the end of migration, undo the effects of init_range * opaque should be the MIS. 
*/ -static int cleanup_range(const char *block_name, void *host_addr, - ram_addr_t offset, ram_addr_t length, void *opaque) +static int cleanup_range(RAMBlock *rb, void *opaque) { + const char *block_name = qemu_ram_get_idstr(rb); + void *host_addr = qemu_ram_get_host_addr(rb); + ram_addr_t offset = qemu_ram_get_offset(rb); + ram_addr_t length = qemu_ram_get_used_length(rb); MigrationIncomingState *mis = opaque; struct uffdio_range range_struct; trace_postcopy_cleanup_range(block_name, host_addr, offset, length); @@ -586,9 +592,12 @@ int postcopy_ram_incoming_cleanup(MigrationIncomingState *mis) /* * Disable huge pages on an area */ -static int nhp_range(const char *block_name, void *host_addr, - ram_addr_t offset, ram_addr_t length, void *opaque) +static int nhp_range(RAMBlock *rb, void *opaque) { + const char *block_name = qemu_ram_get_idstr(rb); + void *host_addr = qemu_ram_get_host_addr(rb); + ram_addr_t offset = qemu_ram_get_offset(rb); + ram_addr_t length = qemu_ram_get_used_length(rb); trace_postcopy_nhp_range(block_name, host_addr, offset, length); /* @@ -626,15 +635,13 @@ int postcopy_ram_prepare_discard(MigrationIncomingState *mis) * opaque: MigrationIncomingState pointer * Returns 0 on success */ -static int ram_block_enable_notify(const char *block_name, void *host_addr, - ram_addr_t offset, ram_addr_t length, - void *opaque) +static int ram_block_enable_notify(RAMBlock *rb, void *opaque) { MigrationIncomingState *mis = opaque; struct uffdio_register reg_struct; - reg_struct.range.start = (uintptr_t)host_addr; - reg_struct.range.len = length; + reg_struct.range.start = (uintptr_t)qemu_ram_get_host_addr(rb); + reg_struct.range.len = qemu_ram_get_used_length(rb); reg_struct.mode = UFFDIO_REGISTER_MODE_MISSING; /* Now tell our userfault_fd that it's responsible for this area */ @@ -647,7 +654,6 @@ static int ram_block_enable_notify(const char *block_name, void *host_addr, return -1; } if (reg_struct.ioctls & ((__u64)1 << _UFFDIO_ZEROPAGE)) { - RAMBlock *rb = qemu_ram_block_by_name(block_name); qemu_ram_set_uf_zeroable(rb); } diff --git a/migration/rdma.c b/migration/rdma.c index 54a3c11540..7eb38ee764 100644 --- a/migration/rdma.c +++ b/migration/rdma.c @@ -624,9 +624,12 @@ static int rdma_add_block(RDMAContext *rdma, const char *block_name, * in advanced before the migration starts. This tells us where the RAM blocks * are so that we can register them individually. 
*/ -static int qemu_rdma_init_one_block(const char *block_name, void *host_addr, - ram_addr_t block_offset, ram_addr_t length, void *opaque) +static int qemu_rdma_init_one_block(RAMBlock *rb, void *opaque) { + const char *block_name = qemu_ram_get_idstr(rb); + void *host_addr = qemu_ram_get_host_addr(rb); + ram_addr_t block_offset = qemu_ram_get_offset(rb); + ram_addr_t length = qemu_ram_get_used_length(rb); return rdma_add_block(opaque, block_name, host_addr, block_offset, length); } diff --git a/stubs/ram-block.c b/stubs/ram-block.c index cfa5d8678f..73c0a3ee08 100644 --- a/stubs/ram-block.c +++ b/stubs/ram-block.c @@ -2,6 +2,21 @@ #include "exec/ramlist.h" #include "exec/cpu-common.h" +void *qemu_ram_get_host_addr(RAMBlock *rb) +{ + return 0; +} + +ram_addr_t qemu_ram_get_offset(RAMBlock *rb) +{ + return 0; +} + +ram_addr_t qemu_ram_get_used_length(RAMBlock *rb) +{ + return 0; +} + void ram_block_notifier_add(RAMBlockNotifier *n) { } diff --git a/util/vfio-helpers.c b/util/vfio-helpers.c index 342d4a2285..2367fe8f7f 100644 --- a/util/vfio-helpers.c +++ b/util/vfio-helpers.c @@ -391,10 +391,10 @@ static void qemu_vfio_ram_block_removed(RAMBlockNotifier *n, } } -static int qemu_vfio_init_ramblock(const char *block_name, void *host_addr, - ram_addr_t offset, ram_addr_t length, - void *opaque) +static int qemu_vfio_init_ramblock(RAMBlock *rb, void *opaque) { + void *host_addr = qemu_ram_get_host_addr(rb); + ram_addr_t length = qemu_ram_get_used_length(rb); int ret; QEMUVFIOState *s = opaque; From patchwork Fri Feb 15 17:45:45 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yury Kotov X-Patchwork-Id: 10815613 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 72AD66C2 for ; Fri, 15 Feb 2019 17:47:34 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 61AAE2FCF0 for ; Fri, 15 Feb 2019 17:47:34 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 54A7D2FCFE; Fri, 15 Feb 2019 17:47:34 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-2.7 required=2.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,MAILING_LIST_MULTI autolearn=ham version=3.3.1 Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1 with cipher AES256-SHA (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id B9D922FCEC for ; Fri, 15 Feb 2019 17:47:33 +0000 (UTC) Received: from localhost ([127.0.0.1]:44024 helo=lists.gnu.org) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1guhaG-0006w0-Ap for patchwork-qemu-devel@patchwork.kernel.org; Fri, 15 Feb 2019 12:47:32 -0500 Received: from eggs.gnu.org ([209.51.188.92]:44062) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1guhYy-000645-Sk for qemu-devel@nongnu.org; Fri, 15 Feb 2019 12:46:13 -0500 Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1guhYy-0002HB-3s for qemu-devel@nongnu.org; Fri, 15 Feb 2019 12:46:12 -0500 Received: from forwardcorp1j.cmail.yandex.net ([5.255.227.105]:55741) by eggs.gnu.org with esmtps (TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.71) (envelope-from ) id 1guhYx-0002FQ-Oq for qemu-devel@nongnu.org; Fri, 15 Feb 2019 12:46:12 -0500 Received: 
from mxbackcorp1j.mail.yandex.net (mxbackcorp1j.mail.yandex.net [IPv6:2a02:6b8:0:1619::162]) by forwardcorp1j.cmail.yandex.net (Yandex) with ESMTP id B437E213E1; Fri, 15 Feb 2019 20:46:08 +0300 (MSK)
From: Yury Kotov
To: "Dr. David Alan Gilbert", Eduardo Habkost, Eric Blake, Igor Mammedov, Juan Quintela, Laurent Vivier, Markus Armbruster, Paolo Bonzini, Peter Crosthwaite, Richard Henderson, Thomas Huth, qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org, jiangshanlai@gmail.com, qemu-devel@lists.ewheeler.net, wrfsh@yandex-team.ru
Date: Fri, 15 Feb 2019 20:45:45 +0300
Message-Id: <20190215174548.2630-3-yury-kotov@yandex-team.ru>
In-Reply-To: <20190215174548.2630-1-yury-kotov@yandex-team.ru>
References: <20190215174548.2630-1-yury-kotov@yandex-team.ru>
Subject: [Qemu-devel] [PATCH v3 2/5] migration: Introduce ignore-shared capability

We want to use local migration to update QEMU for running guests. In this case we don't need to migrate shared (file backed) RAM. So, add a capability to ignore such blocks during live migration.

Signed-off-by: Yury Kotov
Reviewed-by: Dr.
David Alan Gilbert --- migration/migration.c | 14 ++++++++++++++ migration/migration.h | 1 + qapi/migration.json | 5 ++++- 3 files changed, 19 insertions(+), 1 deletion(-) diff --git a/migration/migration.c b/migration/migration.c index 37e06b76dc..0fadd97fb9 100644 --- a/migration/migration.c +++ b/migration/migration.c @@ -957,6 +957,11 @@ static bool migrate_caps_check(bool *cap_list, error_setg(errp, "Postcopy is not supported"); return false; } + + if (cap_list[MIGRATION_CAPABILITY_X_IGNORE_SHARED]) { + error_setg(errp, "Postcopy is not compatible with ignore-shared"); + return false; + } } return true; @@ -1983,6 +1988,15 @@ bool migrate_dirty_bitmaps(void) return s->enabled_capabilities[MIGRATION_CAPABILITY_DIRTY_BITMAPS]; } +bool migrate_ignore_shared(void) +{ + MigrationState *s; + + s = migrate_get_current(); + + return s->enabled_capabilities[MIGRATION_CAPABILITY_X_IGNORE_SHARED]; +} + bool migrate_use_events(void) { MigrationState *s; diff --git a/migration/migration.h b/migration/migration.h index dcd05d9f87..4211995c53 100644 --- a/migration/migration.h +++ b/migration/migration.h @@ -261,6 +261,7 @@ bool migrate_release_ram(void); bool migrate_postcopy_ram(void); bool migrate_zero_blocks(void); bool migrate_dirty_bitmaps(void); +bool migrate_ignore_shared(void); bool migrate_auto_converge(void); bool migrate_use_multifd(void); diff --git a/qapi/migration.json b/qapi/migration.json index 7a795ecc16..7105570cd3 100644 --- a/qapi/migration.json +++ b/qapi/migration.json @@ -409,13 +409,16 @@ # devices (and thus take locks) immediately at the end of migration. # (since 3.0) # +# @x-ignore-shared: If enabled, QEMU will not migrate shared memory (since 4.0) +# # Since: 1.2 ## { 'enum': 'MigrationCapability', 'data': ['xbzrle', 'rdma-pin-all', 'auto-converge', 'zero-blocks', 'compress', 'events', 'postcopy-ram', 'x-colo', 'release-ram', 'block', 'return-path', 'pause-before-switchover', 'x-multifd', - 'dirty-bitmaps', 'postcopy-blocktime', 'late-block-activate' ] } + 'dirty-bitmaps', 'postcopy-blocktime', 'late-block-activate', + 'x-ignore-shared' ] } ## # @MigrationCapabilityStatus: From patchwork Fri Feb 15 17:45:46 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yury Kotov X-Patchwork-Id: 10815641 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 1579417D5 for ; Fri, 15 Feb 2019 17:51:50 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 04CB22FD9C for ; Fri, 15 Feb 2019 17:51:50 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 016112FDA3; Fri, 15 Feb 2019 17:51:50 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-2.1 required=2.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,MAILING_LIST_MULTI,URG_BIZ autolearn=no version=3.3.1 Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1 with cipher AES256-SHA (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id F00472FDA6 for ; Fri, 15 Feb 2019 17:51:48 +0000 (UTC) Received: from localhost ([127.0.0.1]:44082 helo=lists.gnu.org) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1guheO-0000ht-1X for patchwork-qemu-devel@patchwork.kernel.org; Fri, 15 Feb 2019 
12:51:48 -0500
From: Yury Kotov
To: "Dr. David Alan Gilbert", Eduardo Habkost, Eric Blake, Igor Mammedov, Juan Quintela, Laurent Vivier, Markus Armbruster, Paolo Bonzini, Peter Crosthwaite, Richard Henderson, Thomas Huth, qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org, jiangshanlai@gmail.com, qemu-devel@lists.ewheeler.net, wrfsh@yandex-team.ru
Date: Fri, 15 Feb 2019 20:45:46 +0300
Message-Id: <20190215174548.2630-4-yury-kotov@yandex-team.ru>
In-Reply-To: <20190215174548.2630-1-yury-kotov@yandex-team.ru>
References: <20190215174548.2630-1-yury-kotov@yandex-team.ru>
Subject: [Qemu-devel] [PATCH v3 3/5] migration: Add an ability to ignore shared RAM blocks

If the ignore-shared capability is set, skip shared RAMBlocks during RAM migration. Also, move qemu_ram_foreach_migratable_block into the migration code (and rename it), because it requires access to the migration capabilities.

Signed-off-by: Yury Kotov
Reviewed-by: Dr.
David Alan Gilbert --- exec.c | 19 ------- include/exec/cpu-common.h | 1 - migration/migration.h | 4 +- migration/postcopy-ram.c | 12 ++--- migration/ram.c | 110 +++++++++++++++++++++++++++++--------- migration/rdma.c | 2 +- 6 files changed, 94 insertions(+), 54 deletions(-) diff --git a/exec.c b/exec.c index bf71462929..1d4f3784d6 100644 --- a/exec.c +++ b/exec.c @@ -3985,25 +3985,6 @@ int qemu_ram_foreach_block(RAMBlockIterFunc func, void *opaque) return ret; } -int qemu_ram_foreach_migratable_block(RAMBlockIterFunc func, void *opaque) -{ - RAMBlock *block; - int ret = 0; - - rcu_read_lock(); - RAMBLOCK_FOREACH(block) { - if (!qemu_ram_is_migratable(block)) { - continue; - } - ret = func(block, opaque); - if (ret) { - break; - } - } - rcu_read_unlock(); - return ret; -} - /* * Unmap pages of memory from start to start+length such that * they a) read as 0, b) Trigger whatever fault mechanism diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h index 9cd1f94bc5..cef8b88a2a 100644 --- a/include/exec/cpu-common.h +++ b/include/exec/cpu-common.h @@ -122,7 +122,6 @@ extern struct MemoryRegion io_mem_notdirty; typedef int (RAMBlockIterFunc)(RAMBlock *rb, void *opaque); int qemu_ram_foreach_block(RAMBlockIterFunc func, void *opaque); -int qemu_ram_foreach_migratable_block(RAMBlockIterFunc func, void *opaque); int ram_block_discard_range(RAMBlock *rb, uint64_t start, size_t length); #endif diff --git a/migration/migration.h b/migration/migration.h index 4211995c53..2c88f8a555 100644 --- a/migration/migration.h +++ b/migration/migration.h @@ -302,8 +302,10 @@ void migrate_send_rp_resume_ack(MigrationIncomingState *mis, uint32_t value); void dirty_bitmap_mig_before_vm_start(void); void init_dirty_bitmap_incoming_migration(void); +int foreach_not_ignored_block(RAMBlockIterFunc func, void *opaque); + #define qemu_ram_foreach_block \ - #warning "Use qemu_ram_foreach_block_migratable in migration code" + #warning "Use foreach_not_ignored_block in migration code" void migration_make_urgent_request(void); void migration_consume_urgent_request(void); diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c index b098816221..e2aa57a701 100644 --- a/migration/postcopy-ram.c +++ b/migration/postcopy-ram.c @@ -374,7 +374,7 @@ bool postcopy_ram_supported_by_host(MigrationIncomingState *mis) } /* We don't support postcopy with shared RAM yet */ - if (qemu_ram_foreach_migratable_block(test_ramblock_postcopiable, NULL)) { + if (foreach_not_ignored_block(test_ramblock_postcopiable, NULL)) { goto out; } @@ -508,7 +508,7 @@ static int cleanup_range(RAMBlock *rb, void *opaque) */ int postcopy_ram_incoming_init(MigrationIncomingState *mis) { - if (qemu_ram_foreach_migratable_block(init_range, NULL)) { + if (foreach_not_ignored_block(init_range, NULL)) { return -1; } @@ -550,7 +550,7 @@ int postcopy_ram_incoming_cleanup(MigrationIncomingState *mis) return -1; } - if (qemu_ram_foreach_migratable_block(cleanup_range, mis)) { + if (foreach_not_ignored_block(cleanup_range, mis)) { return -1; } @@ -617,7 +617,7 @@ static int nhp_range(RAMBlock *rb, void *opaque) */ int postcopy_ram_prepare_discard(MigrationIncomingState *mis) { - if (qemu_ram_foreach_migratable_block(nhp_range, mis)) { + if (foreach_not_ignored_block(nhp_range, mis)) { return -1; } @@ -628,7 +628,7 @@ int postcopy_ram_prepare_discard(MigrationIncomingState *mis) /* * Mark the given area of RAM as requiring notification to unwritten areas - * Used as a callback on qemu_ram_foreach_migratable_block. 
+ * Used as a callback on foreach_not_ignored_block. * host_addr: Base of area to mark * offset: Offset in the whole ram arena * length: Length of the section @@ -1122,7 +1122,7 @@ int postcopy_ram_enable_notify(MigrationIncomingState *mis) mis->have_fault_thread = true; /* Mark so that we get notified of accesses to unwritten areas */ - if (qemu_ram_foreach_migratable_block(ram_block_enable_notify, mis)) { + if (foreach_not_ignored_block(ram_block_enable_notify, mis)) { error_report("ram_block_enable_notify failed"); return -1; } diff --git a/migration/ram.c b/migration/ram.c index 59191c1ed2..01315edd66 100644 --- a/migration/ram.c +++ b/migration/ram.c @@ -159,18 +159,44 @@ out: return ret; } +static bool ramblock_is_ignored(RAMBlock *block) +{ + return !qemu_ram_is_migratable(block) || + (migrate_ignore_shared() && qemu_ram_is_shared(block)); +} + /* Should be holding either ram_list.mutex, or the RCU lock. */ +#define RAMBLOCK_FOREACH_NOT_IGNORED(block) \ + INTERNAL_RAMBLOCK_FOREACH(block) \ + if (ramblock_is_ignored(block)) {} else + #define RAMBLOCK_FOREACH_MIGRATABLE(block) \ INTERNAL_RAMBLOCK_FOREACH(block) \ if (!qemu_ram_is_migratable(block)) {} else #undef RAMBLOCK_FOREACH +int foreach_not_ignored_block(RAMBlockIterFunc func, void *opaque) +{ + RAMBlock *block; + int ret = 0; + + rcu_read_lock(); + RAMBLOCK_FOREACH_NOT_IGNORED(block) { + ret = func(block, opaque); + if (ret) { + break; + } + } + rcu_read_unlock(); + return ret; +} + static void ramblock_recv_map_init(void) { RAMBlock *rb; - RAMBLOCK_FOREACH_MIGRATABLE(rb) { + RAMBLOCK_FOREACH_NOT_IGNORED(rb) { assert(!rb->receivedmap); rb->receivedmap = bitmap_new(rb->max_length >> qemu_target_page_bits()); } @@ -1545,7 +1571,7 @@ unsigned long migration_bitmap_find_dirty(RAMState *rs, RAMBlock *rb, unsigned long *bitmap = rb->bmap; unsigned long next; - if (!qemu_ram_is_migratable(rb)) { + if (ramblock_is_ignored(rb)) { return size; } @@ -1594,7 +1620,7 @@ uint64_t ram_pagesize_summary(void) RAMBlock *block; uint64_t summary = 0; - RAMBLOCK_FOREACH_MIGRATABLE(block) { + RAMBLOCK_FOREACH_NOT_IGNORED(block) { summary |= block->page_size; } @@ -1664,7 +1690,7 @@ static void migration_bitmap_sync(RAMState *rs) qemu_mutex_lock(&rs->bitmap_mutex); rcu_read_lock(); - RAMBLOCK_FOREACH_MIGRATABLE(block) { + RAMBLOCK_FOREACH_NOT_IGNORED(block) { migration_bitmap_sync_range(rs, block, 0, block->used_length); } ram_counters.remaining = ram_bytes_remaining(); @@ -2388,7 +2414,7 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss, size_t pagesize_bits = qemu_ram_pagesize(pss->block) >> TARGET_PAGE_BITS; - if (!qemu_ram_is_migratable(pss->block)) { + if (ramblock_is_ignored(pss->block)) { error_report("block %s should not be migrated !", pss->block->idstr); return 0; } @@ -2486,19 +2512,30 @@ void acct_update_position(QEMUFile *f, size_t size, bool zero) } } -uint64_t ram_bytes_total(void) +static uint64_t ram_bytes_total_common(bool count_ignored) { RAMBlock *block; uint64_t total = 0; rcu_read_lock(); - RAMBLOCK_FOREACH_MIGRATABLE(block) { - total += block->used_length; + if (count_ignored) { + RAMBLOCK_FOREACH_MIGRATABLE(block) { + total += block->used_length; + } + } else { + RAMBLOCK_FOREACH_NOT_IGNORED(block) { + total += block->used_length; + } } rcu_read_unlock(); return total; } +uint64_t ram_bytes_total(void) +{ + return ram_bytes_total_common(false); +} + static void xbzrle_load_setup(void) { XBZRLE.decoded_buf = g_malloc(TARGET_PAGE_SIZE); @@ -2547,7 +2584,7 @@ static void ram_save_cleanup(void *opaque) */ 
memory_global_dirty_log_stop(); - RAMBLOCK_FOREACH_MIGRATABLE(block) { + RAMBLOCK_FOREACH_NOT_IGNORED(block) { g_free(block->bmap); block->bmap = NULL; g_free(block->unsentmap); @@ -2610,7 +2647,7 @@ void ram_postcopy_migrated_memory_release(MigrationState *ms) { struct RAMBlock *block; - RAMBLOCK_FOREACH_MIGRATABLE(block) { + RAMBLOCK_FOREACH_NOT_IGNORED(block) { unsigned long *bitmap = block->bmap; unsigned long range = block->used_length >> TARGET_PAGE_BITS; unsigned long run_start = find_next_zero_bit(bitmap, range, 0); @@ -2688,7 +2725,7 @@ static int postcopy_each_ram_send_discard(MigrationState *ms) struct RAMBlock *block; int ret; - RAMBLOCK_FOREACH_MIGRATABLE(block) { + RAMBLOCK_FOREACH_NOT_IGNORED(block) { PostcopyDiscardState *pds = postcopy_discard_send_init(ms, block->idstr); @@ -2896,7 +2933,7 @@ int ram_postcopy_send_discard_bitmap(MigrationState *ms) rs->last_sent_block = NULL; rs->last_page = 0; - RAMBLOCK_FOREACH_MIGRATABLE(block) { + RAMBLOCK_FOREACH_NOT_IGNORED(block) { unsigned long pages = block->used_length >> TARGET_PAGE_BITS; unsigned long *bitmap = block->bmap; unsigned long *unsentmap = block->unsentmap; @@ -3062,7 +3099,7 @@ static void ram_list_init_bitmaps(void) /* Skip setting bitmap if there is no RAM */ if (ram_bytes_total()) { - RAMBLOCK_FOREACH_MIGRATABLE(block) { + RAMBLOCK_FOREACH_NOT_IGNORED(block) { pages = block->max_length >> TARGET_PAGE_BITS; block->bmap = bitmap_new(pages); bitmap_set(block->bmap, 0, pages); @@ -3117,7 +3154,7 @@ static void ram_state_resume_prepare(RAMState *rs, QEMUFile *out) * about dirty page logging as well. */ - RAMBLOCK_FOREACH_MIGRATABLE(block) { + RAMBLOCK_FOREACH_NOT_IGNORED(block) { pages += bitmap_count_one(block->bmap, block->used_length >> TARGET_PAGE_BITS); } @@ -3176,7 +3213,7 @@ static int ram_save_setup(QEMUFile *f, void *opaque) rcu_read_lock(); - qemu_put_be64(f, ram_bytes_total() | RAM_SAVE_FLAG_MEM_SIZE); + qemu_put_be64(f, ram_bytes_total_common(true) | RAM_SAVE_FLAG_MEM_SIZE); RAMBLOCK_FOREACH_MIGRATABLE(block) { qemu_put_byte(f, strlen(block->idstr)); @@ -3185,6 +3222,10 @@ static int ram_save_setup(QEMUFile *f, void *opaque) if (migrate_postcopy_ram() && block->page_size != qemu_host_page_size) { qemu_put_be64(f, block->page_size); } + if (migrate_ignore_shared()) { + qemu_put_be64(f, block->mr->addr); + qemu_put_byte(f, ramblock_is_ignored(block) ? 
1 : 0); + } } rcu_read_unlock(); @@ -3443,7 +3484,7 @@ static inline RAMBlock *ram_block_from_stream(QEMUFile *f, int flags) return NULL; } - if (!qemu_ram_is_migratable(block)) { + if (ramblock_is_ignored(block)) { error_report("block %s should not be migrated !", id); return NULL; } @@ -3698,7 +3739,7 @@ int colo_init_ram_cache(void) RAMBlock *block; rcu_read_lock(); - RAMBLOCK_FOREACH_MIGRATABLE(block) { + RAMBLOCK_FOREACH_NOT_IGNORED(block) { block->colo_cache = qemu_anon_ram_alloc(block->used_length, NULL, false); @@ -3719,7 +3760,7 @@ int colo_init_ram_cache(void) if (ram_bytes_total()) { RAMBlock *block; - RAMBLOCK_FOREACH_MIGRATABLE(block) { + RAMBLOCK_FOREACH_NOT_IGNORED(block) { unsigned long pages = block->max_length >> TARGET_PAGE_BITS; block->bmap = bitmap_new(pages); @@ -3734,7 +3775,7 @@ int colo_init_ram_cache(void) out_locked: - RAMBLOCK_FOREACH_MIGRATABLE(block) { + RAMBLOCK_FOREACH_NOT_IGNORED(block) { if (block->colo_cache) { qemu_anon_ram_free(block->colo_cache, block->used_length); block->colo_cache = NULL; @@ -3751,14 +3792,14 @@ void colo_release_ram_cache(void) RAMBlock *block; memory_global_dirty_log_stop(); - RAMBLOCK_FOREACH_MIGRATABLE(block) { + RAMBLOCK_FOREACH_NOT_IGNORED(block) { g_free(block->bmap); block->bmap = NULL; } rcu_read_lock(); - RAMBLOCK_FOREACH_MIGRATABLE(block) { + RAMBLOCK_FOREACH_NOT_IGNORED(block) { if (block->colo_cache) { qemu_anon_ram_free(block->colo_cache, block->used_length); block->colo_cache = NULL; @@ -3794,7 +3835,7 @@ static int ram_load_cleanup(void *opaque) { RAMBlock *rb; - RAMBLOCK_FOREACH_MIGRATABLE(rb) { + RAMBLOCK_FOREACH_NOT_IGNORED(rb) { if (ramblock_is_pmem(rb)) { pmem_persist(rb->host, rb->used_length); } @@ -3803,7 +3844,7 @@ static int ram_load_cleanup(void *opaque) xbzrle_load_cleanup(); compress_threads_load_cleanup(); - RAMBLOCK_FOREACH_MIGRATABLE(rb) { + RAMBLOCK_FOREACH_NOT_IGNORED(rb) { g_free(rb->receivedmap); rb->receivedmap = NULL; } @@ -4003,7 +4044,7 @@ static void colo_flush_ram_cache(void) memory_global_dirty_log_sync(); rcu_read_lock(); - RAMBLOCK_FOREACH_MIGRATABLE(block) { + RAMBLOCK_FOREACH_NOT_IGNORED(block) { migration_bitmap_sync_range(ram_state, block, 0, block->used_length); } rcu_read_unlock(); @@ -4146,6 +4187,23 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id) ret = -EINVAL; } } + if (migrate_ignore_shared()) { + hwaddr addr = qemu_get_be64(f); + bool ignored = qemu_get_byte(f); + if (ignored != ramblock_is_ignored(block)) { + error_report("RAM block %s should %s be migrated", + id, ignored ? 
"" : "not"); + ret = -EINVAL; + } + if (ramblock_is_ignored(block) && + block->mr->addr != addr) { + error_report("Mismatched GPAs for block %s " + "%" PRId64 "!= %" PRId64, + id, (uint64_t)addr, + (uint64_t)block->mr->addr); + ret = -EINVAL; + } + } ram_control_load_hook(f, RAM_CONTROL_BLOCK_REG, block->idstr); } else { @@ -4216,7 +4274,7 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id) static bool ram_has_postcopy(void *opaque) { RAMBlock *rb; - RAMBLOCK_FOREACH_MIGRATABLE(rb) { + RAMBLOCK_FOREACH_NOT_IGNORED(rb) { if (ramblock_is_pmem(rb)) { info_report("Block: %s, host: %p is a nvdimm memory, postcopy" "is not supported now!", rb->idstr, rb->host); @@ -4236,7 +4294,7 @@ static int ram_dirty_bitmap_sync_all(MigrationState *s, RAMState *rs) trace_ram_dirty_bitmap_sync_start(); - RAMBLOCK_FOREACH_MIGRATABLE(block) { + RAMBLOCK_FOREACH_NOT_IGNORED(block) { qemu_savevm_send_recv_bitmap(file, block->idstr); trace_ram_dirty_bitmap_request(block->idstr); ramblock_count++; diff --git a/migration/rdma.c b/migration/rdma.c index 7eb38ee764..3cb579cc99 100644 --- a/migration/rdma.c +++ b/migration/rdma.c @@ -644,7 +644,7 @@ static int qemu_rdma_init_ram_blocks(RDMAContext *rdma) assert(rdma->blockmap == NULL); memset(local, 0, sizeof *local); - qemu_ram_foreach_migratable_block(qemu_rdma_init_one_block, rdma); + foreach_not_ignored_block(qemu_rdma_init_one_block, rdma); trace_qemu_rdma_init_ram_blocks(local->nb_blocks); rdma->dest_blocks = g_new0(RDMADestBlock, rdma->local_ram_blocks.nb_blocks); From patchwork Fri Feb 15 17:45:47 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yury Kotov X-Patchwork-Id: 10815635 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 0D2D217E9 for ; Fri, 15 Feb 2019 17:49:51 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id EAE812FB5B for ; Fri, 15 Feb 2019 17:49:50 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id D86692FB61; Fri, 15 Feb 2019 17:49:50 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-2.7 required=2.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,MAILING_LIST_MULTI autolearn=ham version=3.3.1 Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1 with cipher AES256-SHA (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 2B5B92FB5A for ; Fri, 15 Feb 2019 17:49:50 +0000 (UTC) Received: from localhost ([127.0.0.1]:44034 helo=lists.gnu.org) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1guhcT-0007oC-I6 for patchwork-qemu-devel@patchwork.kernel.org; Fri, 15 Feb 2019 12:49:49 -0500 Received: from eggs.gnu.org ([209.51.188.92]:44082) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1guhYz-000647-DF for qemu-devel@nongnu.org; Fri, 15 Feb 2019 12:46:14 -0500 Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1guhYy-0002H5-2l for qemu-devel@nongnu.org; Fri, 15 Feb 2019 12:46:13 -0500 Received: from forwardcorp1j.cmail.yandex.net ([5.255.227.105]:55762) by eggs.gnu.org with esmtps (TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.71) (envelope-from ) id 1guhYx-0002GI-NL for qemu-devel@nongnu.org; Fri, 15 Feb 2019 12:46:12 
-0500
From: Yury Kotov
To: "Dr. David Alan Gilbert", Eduardo Habkost, Eric Blake, Igor Mammedov, Juan Quintela, Laurent Vivier, Markus Armbruster, Paolo Bonzini, Peter Crosthwaite, Richard Henderson, Thomas Huth, qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org, jiangshanlai@gmail.com, qemu-devel@lists.ewheeler.net, wrfsh@yandex-team.ru
Date: Fri, 15 Feb 2019 20:45:47 +0300
Message-Id: <20190215174548.2630-5-yury-kotov@yandex-team.ru>
In-Reply-To: <20190215174548.2630-1-yury-kotov@yandex-team.ru>
References: <20190215174548.2630-1-yury-kotov@yandex-team.ru>
Subject: [Qemu-devel] [PATCH v3 4/5] tests/migration-test: Add a test for ignore-shared capability

Signed-off-by: Yury Kotov
Reviewed-by: Dr. David Alan Gilbert
---
tests/migration-test.c | 131 +++++++++++++++++++++++++++++++++-------- 1 file changed, 106 insertions(+), 25 deletions(-) diff --git a/tests/migration-test.c b/tests/migration-test.c index 8352612364..dd604c4f21 100644 --- a/tests/migration-test.c +++ b/tests/migration-test.c @@ -215,10 +215,10 @@ static gchar *migrate_query_status(QTestState *who) * events suddenly appearing confuse the qmp()/hmp() responses.
*/ -static uint64_t get_migration_pass(QTestState *who) +static int64_t read_ram_property_int(QTestState *who, const char *property) { QDict *rsp_return, *rsp_ram; - uint64_t result; + int64_t result; rsp_return = migrate_query(who); if (!qdict_haskey(rsp_return, "ram")) { @@ -226,12 +226,17 @@ static uint64_t get_migration_pass(QTestState *who) result = 0; } else { rsp_ram = qdict_get_qdict(rsp_return, "ram"); - result = qdict_get_try_int(rsp_ram, "dirty-sync-count", 0); + result = qdict_get_try_int(rsp_ram, property, 0); } qobject_unref(rsp_return); return result; } +static uint64_t get_migration_pass(QTestState *who) +{ + return read_ram_property_int(who, "dirty-sync-count"); +} + static void read_blocktime(QTestState *who) { QDict *rsp_return; @@ -332,6 +337,13 @@ static void cleanup(const char *filename) g_free(path); } +static char *get_shmem_opts(const char *mem_size, const char *shmem_path) +{ + return g_strdup_printf("-object memory-backend-file,id=mem0,size=%s" + ",mem-path=%s,share=on -numa node,memdev=mem0", + mem_size, shmem_path); +} + static void migrate_check_parameter(QTestState *who, const char *parameter, long long value) { @@ -430,73 +442,95 @@ static void migrate_postcopy_start(QTestState *from, QTestState *to) } static int test_migrate_start(QTestState **from, QTestState **to, - const char *uri, bool hide_stderr) + const char *uri, bool hide_stderr, + bool use_shmem) { gchar *cmd_src, *cmd_dst; - char *bootpath = g_strdup_printf("%s/bootsect", tmpfs); + char *bootpath = NULL; + char *extra_opts = NULL; + char *shmem_path = NULL; const char *arch = qtest_get_arch(); const char *accel = "kvm:tcg"; - got_stop = false; + if (use_shmem) { + if (!g_file_test("/dev/shm", G_FILE_TEST_IS_DIR)) { + g_test_skip("/dev/shm is not supported"); + return -1; + } + shmem_path = g_strdup_printf("/dev/shm/qemu-%d", getpid()); + } + got_stop = false; + bootpath = g_strdup_printf("%s/bootsect", tmpfs); if (strcmp(arch, "i386") == 0 || strcmp(arch, "x86_64") == 0) { init_bootfile(bootpath, x86_bootsect); + extra_opts = use_shmem ? get_shmem_opts("150M", shmem_path) : NULL; cmd_src = g_strdup_printf("-machine accel=%s -m 150M" " -name source,debug-threads=on" " -serial file:%s/src_serial" - " -drive file=%s,format=raw", - accel, tmpfs, bootpath); + " -drive file=%s,format=raw %s", + accel, tmpfs, bootpath, + extra_opts ? extra_opts : ""); cmd_dst = g_strdup_printf("-machine accel=%s -m 150M" " -name target,debug-threads=on" " -serial file:%s/dest_serial" " -drive file=%s,format=raw" - " -incoming %s", - accel, tmpfs, bootpath, uri); + " -incoming %s %s", + accel, tmpfs, bootpath, uri, + extra_opts ? extra_opts : ""); start_address = X86_TEST_MEM_START; end_address = X86_TEST_MEM_END; } else if (g_str_equal(arch, "s390x")) { init_bootfile_s390x(bootpath); + extra_opts = use_shmem ? get_shmem_opts("128M", shmem_path) : NULL; cmd_src = g_strdup_printf("-machine accel=%s -m 128M" " -name source,debug-threads=on" - " -serial file:%s/src_serial -bios %s", - accel, tmpfs, bootpath); + " -serial file:%s/src_serial -bios %s %s", + accel, tmpfs, bootpath, + extra_opts ? extra_opts : ""); cmd_dst = g_strdup_printf("-machine accel=%s -m 128M" " -name target,debug-threads=on" " -serial file:%s/dest_serial -bios %s" - " -incoming %s", - accel, tmpfs, bootpath, uri); + " -incoming %s %s", + accel, tmpfs, bootpath, uri, + extra_opts ? extra_opts : ""); start_address = S390_TEST_MEM_START; end_address = S390_TEST_MEM_END; } else if (strcmp(arch, "ppc64") == 0) { + extra_opts = use_shmem ? 
get_shmem_opts("256M", shmem_path) : NULL; cmd_src = g_strdup_printf("-machine accel=%s -m 256M -nodefaults" " -name source,debug-threads=on" " -serial file:%s/src_serial" " -prom-env 'use-nvramrc?=true' -prom-env " "'nvramrc=hex .\" _\" begin %x %x " "do i c@ 1 + i c! 1000 +loop .\" B\" 0 " - "until'", accel, tmpfs, end_address, - start_address); + "until' %s", accel, tmpfs, end_address, + start_address, extra_opts ? extra_opts : ""); cmd_dst = g_strdup_printf("-machine accel=%s -m 256M" " -name target,debug-threads=on" " -serial file:%s/dest_serial" - " -incoming %s", - accel, tmpfs, uri); + " -incoming %s %s", + accel, tmpfs, uri, + extra_opts ? extra_opts : ""); start_address = PPC_TEST_MEM_START; end_address = PPC_TEST_MEM_END; } else if (strcmp(arch, "aarch64") == 0) { init_bootfile(bootpath, aarch64_kernel); + extra_opts = use_shmem ? get_shmem_opts("150M", shmem_path) : NULL; cmd_src = g_strdup_printf("-machine virt,accel=%s,gic-version=max " "-name vmsource,debug-threads=on -cpu max " "-m 150M -serial file:%s/src_serial " - "-kernel %s ", - accel, tmpfs, bootpath); + "-kernel %s %s", + accel, tmpfs, bootpath, + extra_opts ? extra_opts : ""); cmd_dst = g_strdup_printf("-machine virt,accel=%s,gic-version=max " "-name vmdest,debug-threads=on -cpu max " "-m 150M -serial file:%s/dest_serial " "-kernel %s " - "-incoming %s ", - accel, tmpfs, bootpath, uri); + "-incoming %s %s", + accel, tmpfs, bootpath, uri, + extra_opts ? extra_opts : ""); start_address = ARM_TEST_MEM_START; end_address = ARM_TEST_MEM_END; @@ -507,6 +541,7 @@ static int test_migrate_start(QTestState **from, QTestState **to, } g_free(bootpath); + g_free(extra_opts); if (hide_stderr) { gchar *tmp; @@ -524,6 +559,16 @@ static int test_migrate_start(QTestState **from, QTestState **to, *to = qtest_init(cmd_dst); g_free(cmd_dst); + + /* + * Remove shmem file immediately to avoid memory leak in test failed case. 
+ * It's valid becase QEMU has already opened this file + */ + if (use_shmem) { + unlink(shmem_path); + g_free(shmem_path); + } + return 0; } @@ -603,7 +648,7 @@ static int migrate_postcopy_prepare(QTestState **from_ptr, char *uri = g_strdup_printf("unix:%s/migsocket", tmpfs); QTestState *from, *to; - if (test_migrate_start(&from, &to, uri, hide_error)) { + if (test_migrate_start(&from, &to, uri, hide_error, false)) { return -1; } @@ -720,7 +765,7 @@ static void test_baddest(void) char *status; bool failed; - if (test_migrate_start(&from, &to, "tcp:0:0", true)) { + if (test_migrate_start(&from, &to, "tcp:0:0", true, false)) { return; } migrate(from, "tcp:0:0", "{}"); @@ -745,7 +790,7 @@ static void test_precopy_unix(void) char *uri = g_strdup_printf("unix:%s/migsocket", tmpfs); QTestState *from, *to; - if (test_migrate_start(&from, &to, uri, false)) { + if (test_migrate_start(&from, &to, uri, false, false)) { return; } @@ -781,6 +826,41 @@ static void test_precopy_unix(void) g_free(uri); } +static void test_ignore_shared(void) +{ + char *uri = g_strdup_printf("unix:%s/migsocket", tmpfs); + QTestState *from, *to; + + if (test_migrate_start(&from, &to, uri, false, true)) { + return; + } + + migrate_set_capability(from, "x-ignore-shared", true); + migrate_set_capability(to, "x-ignore-shared", true); + + /* Wait for the first serial output from the source */ + wait_for_serial("src_serial"); + + migrate(from, uri, "{}"); + + wait_for_migration_pass(from); + + if (!got_stop) { + qtest_qmp_eventwait(from, "STOP"); + } + + qtest_qmp_eventwait(to, "RESUME"); + + wait_for_serial("dest_serial"); + wait_for_migration_complete(from); + + /* Check whether shared RAM has been really skipped */ + g_assert_cmpint(read_ram_property_int(from, "transferred"), <, 1024 * 1024); + + test_migrate_end(from, to, true); + g_free(uri); +} + int main(int argc, char **argv) { char template[] = "/tmp/migration-test-XXXXXX"; @@ -832,6 +912,7 @@ int main(int argc, char **argv) qtest_add_func("/migration/deprecated", test_deprecated); qtest_add_func("/migration/bad_dest", test_baddest); qtest_add_func("/migration/precopy/unix", test_precopy_unix); + qtest_add_func("/migration/ignore_shared", test_ignore_shared); ret = g_test_run(); From patchwork Fri Feb 15 17:45:48 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yury Kotov X-Patchwork-Id: 10815637 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id A3BD613B4 for ; Fri, 15 Feb 2019 17:49:51 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 91FBE2FB5A for ; Fri, 15 Feb 2019 17:49:51 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 859D82FB61; Fri, 15 Feb 2019 17:49:51 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-2.7 required=2.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,MAILING_LIST_MULTI autolearn=ham version=3.3.1 Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1 with cipher AES256-SHA (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id E6BDE2FB5A for ; Fri, 15 Feb 2019 17:49:50 +0000 (UTC) Received: from localhost ([127.0.0.1]:44042 helo=lists.gnu.org) by lists.gnu.org with esmtp (Exim 4.71) 
(envelope-from ) id 1guhcU-0008Sq-9x for patchwork-qemu-devel@patchwork.kernel.org; Fri, 15 Feb 2019 12:49:50 -0500
From: Yury Kotov
To: "Dr. David Alan Gilbert", Eduardo Habkost, Eric Blake, Igor Mammedov, Juan Quintela, Laurent Vivier, Markus Armbruster, Paolo Bonzini, Peter Crosthwaite, Richard Henderson, Thomas Huth, qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org, jiangshanlai@gmail.com, qemu-devel@lists.ewheeler.net, wrfsh@yandex-team.ru
Date: Fri, 15 Feb 2019 20:45:48 +0300
Message-Id: <20190215174548.2630-6-yury-kotov@yandex-team.ru>
In-Reply-To: <20190215174548.2630-1-yury-kotov@yandex-team.ru>
References: <20190215174548.2630-1-yury-kotov@yandex-team.ru>
Subject: [Qemu-devel] [PATCH v3 5/5] migration: Add capabilities validation

Currently we don't check which capabilities are set in the source QEMU; we just expect the target QEMU to have the same capabilities enabled. Add explicit validation of the capabilities to make sure that the target VM has them too. This is enabled only for new capabilities to keep compatibility.

Signed-off-by: Yury Kotov
Reviewed-by: Dr.
David Alan Gilbert --- migration/savevm.c | 137 +++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 137 insertions(+) diff --git a/migration/savevm.c b/migration/savevm.c index 322660438d..a721cf5868 100644 --- a/migration/savevm.c +++ b/migration/savevm.c @@ -57,6 +57,7 @@ #include "sysemu/replay.h" #include "qjson.h" #include "migration/colo.h" +#include "qemu/bitmap.h" #ifndef ETH_P_RARP #define ETH_P_RARP 0x8035 @@ -316,6 +317,8 @@ typedef struct SaveState { uint32_t len; const char *name; uint32_t target_page_bits; + uint32_t caps_count; + MigrationCapability *capabilities; } SaveState; static SaveState savevm_state = { @@ -323,15 +326,51 @@ static SaveState savevm_state = { .global_section_id = 0, }; +static bool should_validate_capability(int capability) +{ + assert(capability >= 0 && capability < MIGRATION_CAPABILITY__MAX); + /* Validate only new capabilities to keep compatibility. */ + switch (capability) { + case MIGRATION_CAPABILITY_X_IGNORE_SHARED: + return true; + default: + return false; + } +} + +static uint32_t get_validatable_capabilities_count(void) +{ + MigrationState *s = migrate_get_current(); + uint32_t result = 0; + int i; + for (i = 0; i < MIGRATION_CAPABILITY__MAX; i++) { + if (should_validate_capability(i) && s->enabled_capabilities[i]) { + result++; + } + } + return result; +} + static int configuration_pre_save(void *opaque) { SaveState *state = opaque; const char *current_name = MACHINE_GET_CLASS(current_machine)->name; + MigrationState *s = migrate_get_current(); + int i, j; state->len = strlen(current_name); state->name = current_name; state->target_page_bits = qemu_target_page_bits(); + state->caps_count = get_validatable_capabilities_count(); + state->capabilities = g_renew(MigrationCapability, state->capabilities, + state->caps_count); + for (i = j = 0; i < MIGRATION_CAPABILITY__MAX; i++) { + if (should_validate_capability(i) && s->enabled_capabilities[i]) { + state->capabilities[j++] = i; + } + } + return 0; } @@ -347,6 +386,40 @@ static int configuration_pre_load(void *opaque) return 0; } +static bool configuration_validate_capabilities(SaveState *state) +{ + bool ret = true; + MigrationState *s = migrate_get_current(); + unsigned long *source_caps_bm; + int i; + + source_caps_bm = bitmap_new(MIGRATION_CAPABILITY__MAX); + for (i = 0; i < state->caps_count; i++) { + MigrationCapability capability = state->capabilities[i]; + set_bit(capability, source_caps_bm); + } + + for (i = 0; i < MIGRATION_CAPABILITY__MAX; i++) { + bool source_state, target_state; + if (!should_validate_capability(i)) { + continue; + } + source_state = test_bit(i, source_caps_bm); + target_state = s->enabled_capabilities[i]; + if (source_state != target_state) { + error_report("Capability %s is %s, but received capability is %s", + MigrationCapability_str(i), + target_state ? "on" : "off", + source_state ? 
"on" : "off"); + ret = false; + /* Don't break here to report all failed capabilities */ + } + } + + g_free(source_caps_bm); + return ret; +} + static int configuration_post_load(void *opaque, int version_id) { SaveState *state = opaque; @@ -364,9 +437,53 @@ static int configuration_post_load(void *opaque, int version_id) return -EINVAL; } + if (!configuration_validate_capabilities(state)) { + return -EINVAL; + } + return 0; } +static int get_capability(QEMUFile *f, void *pv, size_t size, + const VMStateField *field) +{ + MigrationCapability *capability = pv; + char capability_str[UINT8_MAX + 1]; + uint8_t len; + int i; + + len = qemu_get_byte(f); + qemu_get_buffer(f, (uint8_t *)capability_str, len); + capability_str[len] = '\0'; + for (i = 0; i < MIGRATION_CAPABILITY__MAX; i++) { + if (!strcmp(MigrationCapability_str(i), capability_str)) { + *capability = i; + return 0; + } + } + error_report("Received unknown capability %s", capability_str); + return -EINVAL; +} + +static int put_capability(QEMUFile *f, void *pv, size_t size, + const VMStateField *field, QJSON *vmdesc) +{ + MigrationCapability *capability = pv; + const char *capability_str = MigrationCapability_str(*capability); + size_t len = strlen(capability_str); + assert(len <= UINT8_MAX); + + qemu_put_byte(f, len); + qemu_put_buffer(f, (uint8_t *)capability_str, len); + return 0; +} + +static const VMStateInfo vmstate_info_capability = { + .name = "capability", + .get = get_capability, + .put = put_capability, +}; + /* The target-page-bits subsection is present only if the * target page size is not the same as the default (ie the * minimum page size for a variable-page-size guest CPU). @@ -380,6 +497,11 @@ static bool vmstate_target_page_bits_needed(void *opaque) > qemu_target_page_bits_min(); } +static bool vmstate_capabilites_needed(void *opaque) +{ + return get_validatable_capabilities_count() > 0; +} + static const VMStateDescription vmstate_target_page_bits = { .name = "configuration/target-page-bits", .version_id = 1, @@ -391,6 +513,20 @@ static const VMStateDescription vmstate_target_page_bits = { } }; +static const VMStateDescription vmstate_capabilites = { + .name = "configuration/capabilities", + .version_id = 1, + .minimum_version_id = 1, + .needed = vmstate_capabilites_needed, + .fields = (VMStateField[]) { + VMSTATE_UINT32_V(caps_count, SaveState, 1), + VMSTATE_VARRAY_UINT32_ALLOC(capabilities, SaveState, caps_count, 1, + vmstate_info_capability, + MigrationCapability), + VMSTATE_END_OF_LIST() + } +}; + static const VMStateDescription vmstate_configuration = { .name = "configuration", .version_id = 1, @@ -404,6 +540,7 @@ static const VMStateDescription vmstate_configuration = { }, .subsections = (const VMStateDescription*[]) { &vmstate_target_page_bits, + &vmstate_capabilites, NULL } };