From: guangrong.xiao@gmail.com
To: pbonzini@redhat.com, mst@redhat.com, mtosatti@redhat.com
Cc: qemu-devel@nongnu.org, kvm@vger.kernel.org, dgilbert@redhat.com,
    peterx@redhat.com, wei.w.wang@intel.com, jiang.biao2@zte.com.cn,
    eblake@redhat.com, quintela@redhat.com,
    Xiao Guangrong <xiaoguangrong@tencent.com>
Subject: [PATCH 0/4] migration: improve multithreads
Date: Tue, 16 Oct 2018 19:10:02 +0800
Message-Id: <20181016111006.629-1-xiaoguangrong@tencent.com>
Series: migration: improve multithreads
From: Xiao Guangrong <xiaoguangrong@tencent.com>

This is the last part of our previous work:
   https://lists.gnu.org/archive/html/qemu-devel/2018-06/msg00526.html

This part finally improves the multithread model used by compression
and decompression, which makes the compression feature genuinely
usable in production.

Compared with the previous version, we:
1. port ptr_ring from the Linux kernel and use it instead of the
   lockless ring we designed ourselves (Michael, I added myself to the
   list of authors in that file; if you dislike that, I'm fine with
   dropping it. :))
2. search all threads for free room in their local rings to hold a
   request, instead of using round-robin, to reduce the busy-ratio

Background
----------
The current implementations of compression and decompression are very
hard to enable in production. We noticed that too many wait/wake
operations go through kernel space and CPU usage is very low even when
the system is mostly idle. The reasons are:
1) too many locks are used for synchronization: there is a global
   lock, and each single thread has its own lock, so the migration
   thread and the worker threads have to go to sleep whenever these
   locks are busy
2) the migration thread submits requests to each thread separately,
   and only one request can be pending per thread, which means a
   thread has to go to sleep after finishing its request

Our Ideas
---------
To make it work better, we introduce a lockless multithread model: the
user (currently the migration thread) submits requests to each thread,
each of which has its own ring with a capacity of 4, and the threads
put their results into a global ring from which the user fetches them
and performs the remaining operations for each request, e.g. posting
the compressed data out for migration on the source QEMU. A minimal
sketch of this model follows.
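The following is an illustrative sketch only, not code from the
series: a single-producer/single-consumer pointer ring in the spirit
of the kernel's ptr_ring (a NULL slot marks free space, so no lock is
needed between one producer and one consumer), plus a submit path that
scans every worker for a free slot instead of using round-robin. The
names (SketchRing, sketch_ring_put, sketch_submit) and the exact
memory orderings are hypothetical, not the API the patches introduce.

/* Sketch of the lockless model; hypothetical names, C11 atomics. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

#define RING_SIZE 4             /* per-thread ring capacity, as in the series */

typedef struct SketchRing {
    /* A slot is free iff it holds NULL; the slot is the sync point. */
    void *_Atomic slot[RING_SIZE];
    unsigned int head;          /* consumer index, consumer-private */
    unsigned int tail;          /* producer index, producer-private */
} SketchRing;

/* Producer side: the migration thread pushes one request. */
static bool sketch_ring_put(SketchRing *r, void *req)
{
    unsigned int i = r->tail % RING_SIZE;

    /* Acquire pairs with the consumer's release of the drained slot. */
    if (atomic_load_explicit(&r->slot[i], memory_order_acquire)) {
        return false;           /* ring full, try another worker */
    }
    /* Release: the request payload is visible before the pointer is. */
    atomic_store_explicit(&r->slot[i], req, memory_order_release);
    r->tail++;
    return true;
}

/* Consumer side: a worker thread pops one request, or NULL if empty. */
static void *sketch_ring_get(SketchRing *r)
{
    unsigned int i = r->head % RING_SIZE;
    void *req = atomic_load_explicit(&r->slot[i], memory_order_acquire);

    if (req) {
        /* Free the slot only after we own the request. */
        atomic_store_explicit(&r->slot[i], NULL, memory_order_release);
        r->head++;
    }
    return req;
}

/*
 * Submit path: scan all workers and hand the request to the first one
 * with a free slot (idea 2 above), rather than blindly round-robining
 * onto a worker whose ring may still be full.
 */
static bool sketch_submit(SketchRing *workers, int nr_workers, void *req)
{
    for (int i = 0; i < nr_workers; i++) {
        if (sketch_ring_put(&workers[i], req)) {
            return true;
        }
    }
    return false;               /* all full; caller waits or works inline */
}

In this sketch a worker polls sketch_ring_get() on its own ring and
would push results into a shared results ring; multi-producer handling
for that ring, and putting threads to sleep and waking them when rings
run empty or full, are omitted here.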
Performance Result
------------------
We tested live migration between two hosts (Intel(R) Xeon(R) Gold 6142
CPU @ 2.60GHz * 64 + 256G memory each), migrating a VM with 16 vCPUs
and 120G memory between them; during the migration, multiple threads
repeatedly wrote to the memory in the VM.

We used 16 threads on the destination to decompress the data, and on
the source we tried 4, 8 and 16 threads to compress the data.

1) 4 threads, compress-wait-thread = off

   CPU usage
              main thread      compression threads
   -----------------------------------------------
   before     66.2             32.4~36.8
   after      56.5             59.4~60.9

   Migration result
              total time       busy-ratio
   -----------------------------------------------
   before     247371           0.54
   after      138326           0.55

2) 4 threads, compress-wait-thread = on

   CPU usage
              main thread      compression threads
   -----------------------------------------------
   before     55.1             51.0~63.3
   after      99.9             99.9

   Migration result
              total time       busy-ratio
   -----------------------------------------------
   before     CAN'T COMPLETE   0
   after      338692           0

3) 8 threads, compress-wait-thread = off

   CPU usage
              main thread      compression threads
   -----------------------------------------------
   before     43.3             17.5~32.5
   after      54.5             54.5~56.8

   Migration result
              total time       busy-ratio
   -----------------------------------------------
   before     427384           0.19
   after      125066           0.38

4) 8 threads, compress-wait-thread = on

   CPU usage
              main thread      compression threads
   -----------------------------------------------
   before     96.3             2.3~46.8
   after      90.6             90.6~91.8

   Migration result
              total time       busy-ratio
   -----------------------------------------------
   before     CAN'T COMPLETE   0
   after      164426           0

5) 16 threads, compress-wait-thread = off

   CPU usage
              main thread      compression threads
   -----------------------------------------------
   before     56.2             6.2~56.2
   after      37.8             37.8~40.2

   Migration result
              total time       busy-ratio
   -----------------------------------------------
   before     2317123          0.02
   after      149006           0.02

6) 16 threads, compress-wait-thread = on

   CPU usage
              main thread      compression threads
   -----------------------------------------------
   before     48.3             1.7~31.0
   after      43.9             42.1~45.6

   Migration result
              total time       busy-ratio
   -----------------------------------------------
   before     1792817          0.00
   after      161423           0.00

Xiao Guangrong (4):
  ptr_ring: port ptr_ring from linux kernel to QEMU
  migration: introduce lockless multithreads model
  migration: use lockless Multithread model for compression
  migration: use lockless Multithread model for decompression

 include/qemu/lockless-threads.h |  63 +++++
 include/qemu/ptr_ring.h         | 235 ++++++++++++++++++
 migration/ram.c                 | 535 +++++++++++++++------------------------
 util/Makefile.objs              |   1 +
 util/lockless-threads.c         | 373 ++++++++++++++++++++++++++++
 5 files changed, 865 insertions(+), 342 deletions(-)
 create mode 100644 include/qemu/lockless-threads.h
 create mode 100644 include/qemu/ptr_ring.h
 create mode 100644 util/lockless-threads.c