From patchwork Wed Feb 22 03:42:10 2017
X-Patchwork-Submitter: Zhanghailiang
X-Patchwork-Id: 9586115
From: zhanghailiang
Date: Wed, 22 Feb 2017 11:42:10 +0800
Message-ID: <1487734936-43472-10-git-send-email-zhang.zhanghailiang@huawei.com>
In-Reply-To: <1487734936-43472-1-git-send-email-zhang.zhanghailiang@huawei.com>
References: <1487734936-43472-1-git-send-email-zhang.zhanghailiang@huawei.com>
Subject: [Qemu-devel] [PATCH 09/15] COLO: Flush PVM's cached RAM into SVM's memory
Cc: xiecl.fnst@cn.fujitsu.com, zhanghailiang, lizhijian@cn.fujitsu.com, Juan Quintela

While the VM is running, the PVM may dirty some pages; we transfer those
dirty pages to the SVM and store them into the SVM's RAM cache at the next
checkpoint. So, after each checkpoint, the content of the SVM's RAM cache
is always the same as the PVM's memory.

Instead of flushing the whole RAM cache into the SVM's memory, we do it in
a more efficient way: only flush the pages that were dirtied by the PVM
since the last checkpoint. In this way we keep the SVM's memory identical
to the PVM's. Besides, we must flush the RAM cache before loading the
device state.

Cc: Juan Quintela
Signed-off-by: zhanghailiang
Signed-off-by: Li Zhijian
Reviewed-by: Dr. David Alan Gilbert
---
 include/migration/migration.h |  1 +
 migration/ram.c               | 41 +++++++++++++++++++++++++++++++++++++++++
 migration/trace-events        |  2 ++
 3 files changed, 44 insertions(+)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index 93c6148..ba5b97b 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -383,4 +383,5 @@ PostcopyState postcopy_state_set(PostcopyState new_state);
 /* ram cache */
 int colo_init_ram_cache(void);
 void colo_release_ram_cache(void);
+void colo_flush_ram_cache(void);
 #endif
diff --git a/migration/ram.c b/migration/ram.c
index ed3b606..3f57fe0 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2540,6 +2540,7 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
      * be atomic
      */
     bool postcopy_running = postcopy_state_get() >= POSTCOPY_INCOMING_LISTENING;
+    bool need_flush = false;
 
     seq_iter++;
 
@@ -2574,6 +2575,7 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
             /* After going into COLO, we should load the Page into colo_cache */
             if (ram_cache_enable) {
                 host = colo_cache_from_block_offset(block, addr);
+                need_flush = true;
             } else {
                 host = host_from_ram_block_offset(block, addr);
             }
@@ -2668,6 +2670,10 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
     wait_for_decompress_done();
     rcu_read_unlock();
     trace_ram_load_complete(ret, seq_iter);
+
+    if (!ret && ram_cache_enable && need_flush) {
+        colo_flush_ram_cache();
+    }
     return ret;
 }
 
@@ -2738,6 +2744,41 @@ void colo_release_ram_cache(void)
     rcu_read_unlock();
 }
 
+/*
+ * Flush the content of the RAM cache into the SVM's memory.
+ * Only flush the pages that have been dirtied by the PVM, the SVM, or both.
+ */
+void colo_flush_ram_cache(void)
+{
+    RAMBlock *block = NULL;
+    void *dst_host;
+    void *src_host;
+    ram_addr_t offset = 0;
+
+    trace_colo_flush_ram_cache_begin(migration_dirty_pages);
+    rcu_read_lock();
+    block = QLIST_FIRST_RCU(&ram_list.blocks);
+
+    while (block) {
+        ram_addr_t ram_addr_abs;
+        offset = migration_bitmap_find_dirty(block, offset, &ram_addr_abs);
+        migration_bitmap_clear_dirty(ram_addr_abs);
+
+        if (offset >= block->used_length) {
+            offset = 0;
+            block = QLIST_NEXT_RCU(block, next);
+        } else {
+            dst_host = block->host + offset;
+            src_host = block->colo_cache + offset;
+            memcpy(dst_host, src_host, TARGET_PAGE_SIZE);
+        }
+    }
+
+    rcu_read_unlock();
+    trace_colo_flush_ram_cache_end();
+    assert(migration_dirty_pages == 0);
+}
+
 static SaveVMHandlers savevm_ram_handlers = {
     .save_live_setup = ram_save_setup,
     .save_live_iterate = ram_save_iterate,
diff --git a/migration/trace-events b/migration/trace-events
index fa660e3..5d4cf80 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -71,6 +71,8 @@ migration_throttle(void) ""
 ram_load_postcopy_loop(uint64_t addr, int flags) "@%" PRIx64 " %x"
 ram_postcopy_send_discard_bitmap(void) ""
 ram_save_queue_pages(const char *rbname, size_t start, size_t len) "%s: start: %zx len: %zx"
+colo_flush_ram_cache_begin(uint64_t dirty_pages) "dirty_pages %" PRIu64
+colo_flush_ram_cache_end(void) ""
 
 # migration/migration.c
 await_return_path_close_on_source_close(void) ""
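
As a standalone illustration of the approach, here is a minimal sketch of the
idea behind colo_flush_ram_cache(): walk a dirty bitmap and copy only the
dirty pages from the cached copy into live memory, clearing each bit as the
page is flushed. All names in the sketch (flush_dirty_pages, PAGE_SIZE,
NR_PAGES, memory, cache, dirty_bitmap) are invented for illustration and are
not QEMU APIs; the real function instead walks the RAMBlock list and uses
migration_bitmap_find_dirty()/migration_bitmap_clear_dirty() as shown in the
diff above.

    /*
     * Standalone sketch: flush only dirty pages from a cache buffer into
     * live memory. All names here are invented for this example.
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define PAGE_SIZE 4096
    #define NR_PAGES  64
    #define BITS_PER_WORD (8 * sizeof(unsigned long))

    static void flush_dirty_pages(uint8_t *memory, const uint8_t *cache,
                                  unsigned long *dirty_bitmap)
    {
        for (size_t page = 0; page < NR_PAGES; page++) {
            size_t word = page / BITS_PER_WORD;
            unsigned long bit = 1UL << (page % BITS_PER_WORD);

            if (dirty_bitmap[word] & bit) {
                /* Copy only this dirty page, then clear its dirty bit. */
                memcpy(memory + page * PAGE_SIZE, cache + page * PAGE_SIZE,
                       PAGE_SIZE);
                dirty_bitmap[word] &= ~bit;
            }
        }
    }

    int main(void)
    {
        uint8_t *memory = calloc(NR_PAGES, PAGE_SIZE);
        uint8_t *cache = malloc((size_t)NR_PAGES * PAGE_SIZE);
        unsigned long dirty_bitmap[NR_PAGES / BITS_PER_WORD + 1] = { 0 };

        /* Pretend pages 3 and 17 were dirtied since the last checkpoint. */
        memset(cache, 0xab, (size_t)NR_PAGES * PAGE_SIZE);
        dirty_bitmap[0] |= (1UL << 3) | (1UL << 17);

        flush_dirty_pages(memory, cache, dirty_bitmap);

        /* Page 3 was flushed from the cache; page 4 stays untouched. */
        printf("page 3 byte: 0x%02x, page 4 byte: 0x%02x\n",
               memory[3 * PAGE_SIZE], memory[4 * PAGE_SIZE]);
        free(memory);
        free(cache);
        return 0;
    }

The real colo_flush_ram_cache() applies the same per-page copy per RAMBlock,
relying on the migration dirty bitmap that was updated while the PVM's pages
were being received into the cache.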