From patchwork Fri Oct 21 06:48:24 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Liang Li
X-Patchwork-Id: 9388133
From: Liang Li
To: qemu-devel@nongnu.org
Cc: mst@redhat.com, pbonzini@redhat.com, quintela@redhat.com,
    amit.shah@redhat.com, kvm@vger.kernel.org, dgilbert@redhat.com,
    thuth@redhat.com, virtio-dev@lists.oasis-open.org,
    dave.hansen@intel.com, Liang Li
Subject: [PATCH qemu v3 6/6] migration: skip free pages during live migration
Date: Fri, 21 Oct 2016 14:48:24 +0800
Message-Id: <1477032504-12745-7-git-send-email-liang.z.li@intel.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1477032504-12745-1-git-send-email-liang.z.li@intel.com>
References: <1477032504-12745-1-git-send-email-liang.z.li@intel.com>
X-Mailing-List: kvm@vger.kernel.org

After sending out the request for free pages, the live migration process
starts without waiting for the free page bitmap to be ready. If the free
page bitmap is not ready by the time the first migration_bitmap_sync()
runs after ram_save_setup(), the free page bitmap is ignored, which means
the free pages are not filtered out in this case.

The current implementation does not work with postcopy; if postcopy is
enabled, the free pages are simply ignored. This will be made to work
later.

Signed-off-by: Liang Li
---
 migration/ram.c | 86 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 86 insertions(+)

diff --git a/migration/ram.c b/migration/ram.c
index bc6154f..00ce97e 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -43,6 +43,8 @@
 #include "trace.h"
 #include "exec/ram_addr.h"
 #include "qemu/rcu_queue.h"
+#include "sysemu/balloon.h"
+#include "sysemu/kvm.h"
 
 #ifdef DEBUG_MIGRATION_RAM
 #define DPRINTF(fmt, ...) \
@@ -228,6 +230,8 @@ static QemuMutex migration_bitmap_mutex;
 static uint64_t migration_dirty_pages;
 static uint32_t last_version;
 static bool ram_bulk_stage;
+static bool ignore_freepage_rsp;
+static uint64_t free_page_req_id;
 
 /* used by the search for pages to send */
 struct PageSearchStatus {
@@ -244,6 +248,7 @@ static struct BitmapRcu {
     struct rcu_head rcu;
     /* Main migration bitmap */
     unsigned long *bmap;
+    unsigned long *free_page_bmap;
     /* bitmap of pages that haven't been sent even once
      * only maintained and used in postcopy at the moment
      * where it's used to send the dirtymap at the start
@@ -636,6 +641,7 @@ static void migration_bitmap_sync(void)
     rcu_read_unlock();
     qemu_mutex_unlock(&migration_bitmap_mutex);
 
+    ignore_freepage_rsp = true;
     trace_migration_bitmap_sync_end(migration_dirty_pages - num_dirty_pages_init);
     num_dirty_pages_period += migration_dirty_pages - num_dirty_pages_init;
@@ -1411,6 +1417,7 @@ static void migration_bitmap_free(struct BitmapRcu *bmap)
 {
     g_free(bmap->bmap);
     g_free(bmap->unsentmap);
+    g_free(bmap->free_page_bmap);
     g_free(bmap);
 }
 
@@ -1481,6 +1488,77 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
     }
 }
 
+static void filter_out_guest_free_page(unsigned long *free_page_bmap,
+                                       long nbits)
+{
+    long i, page_count = 0, len;
+    unsigned long *bitmap;
+
+    tighten_guest_free_page_bmap(free_page_bmap);
+    qemu_mutex_lock(&migration_bitmap_mutex);
+    bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
+    slow_bitmap_complement(bitmap, free_page_bmap, nbits);
+
+    len = (last_ram_offset() >> TARGET_PAGE_BITS) / BITS_PER_LONG;
+    for (i = 0; i < len; i++) {
+        page_count += hweight_long(bitmap[i]);
+    }
+
+    migration_dirty_pages = page_count;
+    qemu_mutex_unlock(&migration_bitmap_mutex);
+}
+
+static void ram_request_free_page(unsigned long *bmap, unsigned long max_pfn)
+{
+    BalloonReqStatus status;
+
+    free_page_req_id++;
+    status = balloon_get_free_pages(bmap, max_pfn / BITS_PER_BYTE,
+                                    free_page_req_id);
+    if (status == REQ_START) {
+        ignore_freepage_rsp = false;
+    }
+}
+
+static void ram_handle_free_page(void)
+{
+    unsigned long nbits, req_id = 0;
+    RAMBlock *pc_ram_block;
+    BalloonReqStatus status;
+
+    status = balloon_free_page_ready(&req_id);
+    switch (status) {
+    case REQ_DONE:
+        if (req_id != free_page_req_id) {
+            return;
+        }
+        rcu_read_lock();
+        pc_ram_block = QLIST_FIRST_RCU(&ram_list.blocks);
+        nbits = pc_ram_block->used_length >> TARGET_PAGE_BITS;
+        filter_out_guest_free_page(migration_bitmap_rcu->free_page_bmap, nbits);
+        rcu_read_unlock();
+
+        qemu_mutex_lock_iothread();
+        migration_bitmap_sync();
+        qemu_mutex_unlock_iothread();
+        /*
+         * bulk stage assumes in (migration_bitmap_find_and_reset_dirty) that
+         * every page is dirty, that's no longer true at this point.
+         */
+        ram_bulk_stage = false;
+        last_seen_block = NULL;
+        last_sent_block = NULL;
+        last_offset = 0;
+        break;
+    case REQ_ERROR:
+        ignore_freepage_rsp = true;
+        error_report("failed to get free page");
+        break;
+    default:
+        break;
+    }
+}
+
 /*
  * 'expected' is the value you expect the bitmap mostly to be full
  * of; it won't bother printing lines that are all this value.
@@ -1946,6 +2024,11 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
     qemu_mutex_unlock_ramlist();
     qemu_mutex_unlock_iothread();
 
+    if (balloon_free_pages_support() && !migrate_postcopy_ram()) {
+        unsigned long max_pfn = get_guest_max_pfn();
+        migration_bitmap_rcu->free_page_bmap = bitmap_new(max_pfn);
+        ram_request_free_page(migration_bitmap_rcu->free_page_bmap, max_pfn);
+    }
     qemu_put_be64(f, ram_bytes_total() | RAM_SAVE_FLAG_MEM_SIZE);
 
     QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
@@ -1986,6 +2069,9 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
     while ((ret = qemu_file_rate_limit(f)) == 0) {
         int pages;
 
+        if (!ignore_freepage_rsp) {
+            ram_handle_free_page();
+        }
         pages = ram_find_and_save_block(f, false, &bytes_transferred);
         /* no more pages to sent */
         if (pages == 0) {
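
The core of filter_out_guest_free_page() above is simply "dirty &= ~free"
over long-sized words, followed by a popcount to refresh
migration_dirty_pages. The stand-alone program below is a minimal sketch of
that bitmap arithmetic only; it is not part of the patch, the page count and
free-page pattern are made up, and bitmap_andnot()/bitmap_count() are
simplified stand-ins for slow_bitmap_complement() and the hweight_long()
loop used in the patch.

/*
 * Stand-alone illustration of the bitmap filtering step: clear every bit
 * in the migration (dirty) bitmap that is set in the guest free-page
 * bitmap, then recount the remaining dirty pages.
 */
#include <stdio.h>
#include <string.h>
#include <limits.h>

#define NBITS         256                      /* hypothetical guest page count */
#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)
#define NLONGS        ((NBITS + BITS_PER_LONG - 1) / BITS_PER_LONG)

/* dst &= ~src, the effect slow_bitmap_complement() is used for in the patch */
static void bitmap_andnot(unsigned long *dst, const unsigned long *src,
                          long nlongs)
{
    long i;

    for (i = 0; i < nlongs; i++) {
        dst[i] &= ~src[i];
    }
}

/* popcount over the whole bitmap, like the hweight_long() loop in the patch */
static unsigned long bitmap_count(const unsigned long *bmap, long nlongs)
{
    unsigned long count = 0;
    long i;

    for (i = 0; i < nlongs; i++) {
        count += __builtin_popcountl(bmap[i]);
    }
    return count;
}

int main(void)
{
    unsigned long dirty[NLONGS];
    unsigned long free_pages[NLONGS];

    memset(dirty, 0xff, sizeof(dirty));        /* bulk stage: every page dirty */
    memset(free_pages, 0, sizeof(free_pages));
    free_pages[0] = 0xf0f0f0f0UL;              /* pretend the guest reported these free */

    printf("dirty pages before filtering: %lu\n", bitmap_count(dirty, NLONGS));
    bitmap_andnot(dirty, free_pages, NLONGS);
    printf("dirty pages after filtering:  %lu\n", bitmap_count(dirty, NLONGS));
    return 0;
}

With a 64-bit unsigned long this prints 256 and then 240, since 16 pages were
reported free; in the patch the recounted value is what gets written back to
migration_dirty_pages.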