From patchwork Fri Mar 30 07:51:21 2018
X-Patchwork-Submitter: Xiao Guangrong
X-Patchwork-Id: 10317321
From: guangrong.xiao@gmail.com
X-Google-Original-From: xiaoguangrong@tencent.com
To: pbonzini@redhat.com, mst@redhat.com, mtosatti@redhat.com
Cc: kvm@vger.kernel.org, Xiao Guangrong, qemu-devel@nongnu.org,
    peterx@redhat.com, dgilbert@redhat.com, wei.w.wang@intel.com,
    jiang.biao2@zte.com.cn
Date: Fri, 30 Mar 2018 15:51:21 +0800
Message-Id: <20180330075128.26919-4-xiaoguangrong@tencent.com>
In-Reply-To: <20180330075128.26919-1-xiaoguangrong@tencent.com>
References: <20180330075128.26919-1-xiaoguangrong@tencent.com>
X-Mailer: git-send-email 2.14.3
Subject: [Qemu-devel] [PATCH v3 03/10] migration: stop decompression to allocate and free memory frequently

From: Xiao Guangrong

The current code uses uncompress() to decompress memory. uncompress()
manages its working memory internally, so large buffers are allocated
and freed very frequently; worse, frequently returning memory to the
kernel flushes TLBs.

So maintain the memory ourselves and reuse it for each decompression.

Reviewed-by: Peter Xu
Reviewed-by: Jiang Biao
Signed-off-by: Xiao Guangrong
---
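Not part of the patch, just to illustrate the idea in isolation: each
decompress thread keeps one z_stream that is set up once with
inflateInit(), and every page then only pays for an inflateReset() plus
inflate(), instead of the per-call stream allocation and teardown hidden
inside uncompress(). A minimal sketch (the name reuse_uncompress is made
up, not QEMU code):

    /*
     * Sketch only.  The caller owns the z_stream and must have run
     * inflateInit() on it once before the first call; the same stream
     * can then be fed any number of independent compressed buffers.
     */
    #include <stddef.h>
    #include <stdint.h>
    #include <zlib.h>

    static int reuse_uncompress(z_stream *stream,
                                uint8_t *dest, size_t dest_len,
                                const uint8_t *source, size_t source_len)
    {
        /* rewind the stream state; internal buffers stay allocated */
        if (inflateReset(stream) != Z_OK) {
            return -1;
        }

        stream->avail_in = source_len;
        stream->next_in = (uint8_t *)source;
        stream->avail_out = dest_len;
        stream->next_out = dest;

        /* each buffer is a complete zlib stream, so one call finishes it */
        if (inflate(stream, Z_NO_FLUSH) != Z_STREAM_END) {
            return -1;
        }

        return stream->total_out;
    }

inflateReset() is documented as equivalent to inflateEnd() followed by
inflateInit(), except that it keeps the internal decompression state
allocated, which is exactly the allocation/free churn this patch removes.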
 migration/ram.c | 112 +++++++++++++++++++++++++++++++++++++++++---------------
 1 file changed, 82 insertions(+), 30 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index a21514a469..fb24b2f32f 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -281,6 +281,7 @@ struct DecompressParam {
     void *des;
     uint8_t *compbuf;
     int len;
+    z_stream stream;
 };
 typedef struct DecompressParam DecompressParam;
 
@@ -2524,6 +2525,31 @@ void ram_handle_compressed(void *host, uint8_t ch, uint64_t size)
     }
 }
 
+/* return the size after decompression, or negative value on error */
+static int
+qemu_uncompress_data(z_stream *stream, uint8_t *dest, size_t dest_len,
+                     const uint8_t *source, size_t source_len)
+{
+    int err;
+
+    err = inflateReset(stream);
+    if (err != Z_OK) {
+        return -1;
+    }
+
+    stream->avail_in = source_len;
+    stream->next_in = (uint8_t *)source;
+    stream->avail_out = dest_len;
+    stream->next_out = dest;
+
+    err = inflate(stream, Z_NO_FLUSH);
+    if (err != Z_STREAM_END) {
+        return -1;
+    }
+
+    return stream->total_out;
+}
+
 static void *do_data_decompress(void *opaque)
 {
     DecompressParam *param = opaque;
@@ -2540,13 +2566,13 @@ static void *do_data_decompress(void *opaque)
             qemu_mutex_unlock(&param->mutex);
 
             pagesize = TARGET_PAGE_SIZE;
-            /* uncompress() will return failed in some case, especially
-             * when the page is dirted when doing the compression, it's
-             * not a problem because the dirty page will be retransferred
+            /* qemu_uncompress_data() will return failed in some case,
+             * especially when the page is dirtied when doing the compression,
+             * it's not a problem because the dirty page will be retransferred
              * and uncompress() won't break the data in other pages.
              */
-            uncompress((Bytef *)des, &pagesize,
-                       (const Bytef *)param->compbuf, len);
+            qemu_uncompress_data(&param->stream, des, pagesize, param->compbuf,
+                                 len);
 
             qemu_mutex_lock(&decomp_done_lock);
             param->done = true;
@@ -2581,30 +2607,6 @@ static void wait_for_decompress_done(void)
     qemu_mutex_unlock(&decomp_done_lock);
 }
 
-static void compress_threads_load_setup(void)
-{
-    int i, thread_count;
-
-    if (!migrate_use_compression()) {
-        return;
-    }
-    thread_count = migrate_decompress_threads();
-    decompress_threads = g_new0(QemuThread, thread_count);
-    decomp_param = g_new0(DecompressParam, thread_count);
-    qemu_mutex_init(&decomp_done_lock);
-    qemu_cond_init(&decomp_done_cond);
-    for (i = 0; i < thread_count; i++) {
-        qemu_mutex_init(&decomp_param[i].mutex);
-        qemu_cond_init(&decomp_param[i].cond);
-        decomp_param[i].compbuf = g_malloc0(compressBound(TARGET_PAGE_SIZE));
-        decomp_param[i].done = true;
-        decomp_param[i].quit = false;
-        qemu_thread_create(decompress_threads + i, "decompress",
-                           do_data_decompress, decomp_param + i,
-                           QEMU_THREAD_JOINABLE);
-    }
-}
-
 static void compress_threads_load_cleanup(void)
 {
     int i, thread_count;
@@ -2614,16 +2616,30 @@ static void compress_threads_load_cleanup(void)
     }
     thread_count = migrate_decompress_threads();
     for (i = 0; i < thread_count; i++) {
+        /*
+         * we use it as a indicator which shows if the thread is
+         * properly init'd or not
+         */
+        if (!decomp_param[i].compbuf) {
+            break;
+        }
+
         qemu_mutex_lock(&decomp_param[i].mutex);
         decomp_param[i].quit = true;
         qemu_cond_signal(&decomp_param[i].cond);
         qemu_mutex_unlock(&decomp_param[i].mutex);
     }
     for (i = 0; i < thread_count; i++) {
+        if (!decomp_param[i].compbuf) {
+            break;
+        }
+
         qemu_thread_join(decompress_threads + i);
         qemu_mutex_destroy(&decomp_param[i].mutex);
         qemu_cond_destroy(&decomp_param[i].cond);
+        inflateEnd(&decomp_param[i].stream);
         g_free(decomp_param[i].compbuf);
+        decomp_param[i].compbuf = NULL;
     }
     g_free(decompress_threads);
     g_free(decomp_param);
@@ -2631,6 +2647,39 @@ static void compress_threads_load_cleanup(void)
     decomp_param = NULL;
 }
 
+static int compress_threads_load_setup(void)
+{
+    int i, thread_count;
+
+    if (!migrate_use_compression()) {
+        return 0;
+    }
+
+    thread_count = migrate_decompress_threads();
+    decompress_threads = g_new0(QemuThread, thread_count);
+    decomp_param = g_new0(DecompressParam, thread_count);
+    qemu_mutex_init(&decomp_done_lock);
+    qemu_cond_init(&decomp_done_cond);
+    for (i = 0; i < thread_count; i++) {
+        if (inflateInit(&decomp_param[i].stream) != Z_OK) {
+            goto exit;
+        }
+
+        decomp_param[i].compbuf = g_malloc0(compressBound(TARGET_PAGE_SIZE));
+        qemu_mutex_init(&decomp_param[i].mutex);
+        qemu_cond_init(&decomp_param[i].cond);
+        decomp_param[i].done = true;
+        decomp_param[i].quit = false;
+        qemu_thread_create(decompress_threads + i, "decompress",
+                           do_data_decompress, decomp_param + i,
+                           QEMU_THREAD_JOINABLE);
+    }
+    return 0;
+exit:
+    compress_threads_load_cleanup();
+    return -1;
+}
+
 static void decompress_data_with_multi_threads(QEMUFile *f,
                                                void *host, int len)
 {
@@ -2670,8 +2719,11 @@ static void decompress_data_with_multi_threads(QEMUFile *f,
  */
 static int ram_load_setup(QEMUFile *f, void *opaque)
 {
+    if (compress_threads_load_setup()) {
+        return -1;
+    }
+
     xbzrle_load_setup();
-    compress_threads_load_setup();
     ramblock_recv_map_init();
     return 0;
 }
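Because compress_threads_load_setup() can now fail part-way through the
array (inflateInit() may return an error), compress_threads_load_cleanup()
has to cope with slots that were never fully initialized; the patch uses
compbuf as the marker for that. The same pattern in miniature, with
hypothetical names (slot_t, slots_setup, slots_cleanup) and plain libc
allocation instead of the QEMU helpers:

    /*
     * Sketch with made-up names: setup may fail part-way through, so
     * cleanup uses one field that is assigned last (buf) as the marker
     * for "this slot was fully initialized".
     */
    #include <stdlib.h>
    #include <zlib.h>

    #define NR_SLOTS 8

    typedef struct {
        z_stream stream;
        unsigned char *buf;   /* NULL => slot never finished init */
    } slot_t;

    static slot_t slots[NR_SLOTS];

    static void slots_cleanup(void)
    {
        for (int i = 0; i < NR_SLOTS; i++) {
            if (!slots[i].buf) {
                break;                 /* setup never got this far */
            }
            inflateEnd(&slots[i].stream);
            free(slots[i].buf);
            slots[i].buf = NULL;
        }
    }

    static int slots_setup(void)
    {
        for (int i = 0; i < NR_SLOTS; i++) {
            if (inflateInit(&slots[i].stream) != Z_OK) {
                goto exit;
            }
            /* assigned last: non-NULL buf implies the whole slot is ready */
            slots[i].buf = calloc(1, compressBound(4096));
            if (!slots[i].buf) {
                inflateEnd(&slots[i].stream);
                goto exit;
            }
        }
        return 0;

    exit:
        slots_cleanup();
        return -1;
    }

Assigning the marker field last keeps the invariant simple: a non-NULL
buf means every earlier step for that slot succeeded, so cleanup can stop
at the first NULL slot it sees.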