From patchwork Fri Jan 11 06:37:30 2019
X-Patchwork-Submitter: Xiao Guangrong
X-Patchwork-Id: 10757469
From: guangrong.xiao@gmail.com
X-Google-Original-From: xiaoguangrong@tencent.com
To: pbonzini@redhat.com, mst@redhat.com, mtosatti@redhat.com
Cc: qemu-devel@nongnu.org, kvm@vger.kernel.org, dgilbert@redhat.com,
 peterx@redhat.com, wei.w.wang@intel.com, eblake@redhat.com,
 quintela@redhat.com, cota@braap.org, Xiao Guangrong
Subject: [PATCH v2 1/3] migration: introduce pages-per-second
Date: Fri, 11 Jan 2019 14:37:30 +0800
Message-Id: <20190111063732.10484-2-xiaoguangrong@tencent.com>
In-Reply-To: <20190111063732.10484-1-xiaoguangrong@tencent.com>
References: <20190111063732.10484-1-xiaoguangrong@tencent.com>
From: Xiao Guangrong <xiaoguangrong@tencent.com>

Introduce a new statistic, pages-per-second. Bandwidth and mbps alone are
not enough to measure the performance of posting pages out, because
compression and xbzrle can significantly reduce the transferred data size;
pages-per-second is the metric we actually want.

Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
 hmp.c                 |  2 ++
 migration/migration.c | 11 ++++++++++-
 migration/migration.h |  8 ++++++++
 migration/ram.c       |  6 ++++++
 qapi/migration.json   |  5 ++++-
 5 files changed, 30 insertions(+), 2 deletions(-)

diff --git a/hmp.c b/hmp.c
index 80aa5ab504..944e3e072d 100644
--- a/hmp.c
+++ b/hmp.c
@@ -236,6 +236,8 @@ void hmp_info_migrate(Monitor *mon, const QDict *qdict)
                        info->ram->page_size >> 10);
         monitor_printf(mon, "multifd bytes: %" PRIu64 " kbytes\n",
                        info->ram->multifd_bytes >> 10);
+        monitor_printf(mon, "pages-per-second: %" PRIu64 "\n",
+                       info->ram->pages_per_second);
 
     if (info->ram->dirty_pages_rate) {
         monitor_printf(mon, "dirty pages rate: %" PRIu64 " pages\n",
diff --git a/migration/migration.c b/migration/migration.c
index ffc4d9e556..a82d594f29 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -777,6 +777,7 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
     info->ram->postcopy_requests = ram_counters.postcopy_requests;
     info->ram->page_size = qemu_target_page_size();
     info->ram->multifd_bytes = ram_counters.multifd_bytes;
+    info->ram->pages_per_second = s->pages_per_second;
 
     if (migrate_use_xbzrle()) {
         info->has_xbzrle_cache = true;
@@ -1563,6 +1564,7 @@ void migrate_init(MigrationState *s)
     s->rp_state.from_dst_file = NULL;
     s->rp_state.error = false;
     s->mbps = 0.0;
+    s->pages_per_second = 0.0;
     s->downtime = 0;
     s->expected_downtime = 0;
     s->setup_time = 0;
@@ -2881,7 +2883,7 @@ static void migration_calculate_complete(MigrationState *s)
 static void migration_update_counters(MigrationState *s,
                                       int64_t current_time)
 {
-    uint64_t transferred, time_spent;
+    uint64_t transferred, transferred_pages, time_spent;
     uint64_t current_bytes; /* bytes transferred since the beginning */
     double bandwidth;
 
@@ -2898,6 +2900,11 @@ static void migration_update_counters(MigrationState *s,
     s->mbps = (((double) transferred * 8.0) /
                ((double) time_spent / 1000.0)) / 1000.0 / 1000.0;
 
+    transferred_pages = ram_get_total_transferred_pages() -
+                            s->iteration_initial_pages;
+    s->pages_per_second = (double) transferred_pages /
+                             (((double) time_spent / 1000.0));
+
     /*
      * if we haven't sent anything, we don't want to
      * recalculate. 10000 is a small enough number for our purposes
@@ -2910,6 +2917,7 @@ static void migration_update_counters(MigrationState *s,
 
     s->iteration_start_time = current_time;
     s->iteration_initial_bytes = current_bytes;
+    s->iteration_initial_pages = ram_get_total_transferred_pages();
 
     trace_migrate_transferred(transferred, time_spent, bandwidth,
                               s->threshold_size);
@@ -3314,6 +3322,7 @@ static void migration_instance_init(Object *obj)
 
     ms->state = MIGRATION_STATUS_NONE;
     ms->mbps = -1;
+    ms->pages_per_second = -1;
     qemu_sem_init(&ms->pause_sem, 0);
     qemu_mutex_init(&ms->error_mutex);
diff --git a/migration/migration.h b/migration/migration.h
index e413d4d8b6..810effc384 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -126,6 +126,12 @@ struct MigrationState
      */
     QemuSemaphore rate_limit_sem;
 
+    /* pages already sent at the beginning of current iteration */
+    uint64_t iteration_initial_pages;
+
+    /* pages transferred per second */
+    double pages_per_second;
+
     /* bytes already send at the beggining of current interation */
     uint64_t iteration_initial_bytes;
     /* time at the start of current iteration */
@@ -271,6 +277,8 @@ bool migrate_use_block_incremental(void);
 int migrate_max_cpu_throttle(void);
 bool migrate_use_return_path(void);
 
+uint64_t ram_get_total_transferred_pages(void);
+
 bool migrate_use_compression(void);
 int migrate_compress_level(void);
 int migrate_compress_threads(void);
diff --git a/migration/ram.c b/migration/ram.c
index 7e7deec4d8..7e429b0502 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1593,6 +1593,12 @@ uint64_t ram_pagesize_summary(void)
     return summary;
 }
 
+uint64_t ram_get_total_transferred_pages(void)
+{
+    return ram_counters.normal + ram_counters.duplicate +
+           compression_counters.pages + xbzrle_counters.pages;
+}
+
 static void migration_update_rates(RAMState *rs, int64_t end_time)
 {
     uint64_t page_count = rs->target_page_count - rs->target_page_count_prev;
diff --git a/qapi/migration.json b/qapi/migration.json
index 31b589ec26..c5babd03b0 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -41,6 +41,9 @@
 #
 # @multifd-bytes: The number of bytes sent through multifd (since 3.0)
 #
+# @pages-per-second: the number of memory pages transferred per second
+#                    (Since 3.2)
+#
 # Since: 0.14.0
 ##
 { 'struct': 'MigrationStats',
@@ -49,7 +52,7 @@
            'normal-bytes': 'int', 'dirty-pages-rate' : 'int',
            'mbps' : 'number', 'dirty-sync-count' : 'int',
            'postcopy-requests' : 'int', 'page-size' : 'int',
-           'multifd-bytes' : 'uint64' } }
+           'multifd-bytes' : 'uint64', 'pages-per-second' : 'uint64' } }
 
 ##
 # @XBZRLECacheStats: