From patchwork Tue Jan 26 12:44:37 2021
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 12046361
From: Keqian Zhu
To: Marc Zyngier, Will Deacon, Catalin Marinas
Subject: [RFC PATCH 0/7] kvm: arm64: Implement SW/HW combined dirty log
Date: Tue, 26 Jan 2021 20:44:37 +0800
Message-ID: <20210126124444.27136-1-zhukeqian1@huawei.com>
Cc: Mark Rutland, yubihong@huawei.com, jiangkunkun@huawei.com, Suzuki K Poulose, Cornelia Huck, Kirti Wankhede, xiexiangyou@huawei.com, zhengchuan@huawei.com, Alex Williamson, James Morse, wanghaibin.wang@huawei.com, Robin Murphy

The intention:

On the arm64 platform, we track the dirty log of vCPUs through guest memory aborts: KVM takes some vCPU time from the guest to change the stage-2 mapping and mark pages dirty. This has a heavy side effect on the VM, especially when multiple vCPUs race and some of them block on the KVM mmu_lock.

DBM is a hardware-assisted approach to dirty logging: the MMU changes a PTE to writable if its DBM bit is set, so KVM does not have to take vCPU time to log dirty pages.

About this patch series:

The biggest problem with applying DBM to stage 2 is that software must scan the page tables to collect the dirty state, which can cost a lot of time and lengthen the downtime of migration. This series implements a SW/HW combined dirty log that effectively solves this problem (the SMMU side can also use this approach to solve DMA dirty log tracking).

The core idea is that we do not enable hardware dirty tracking from the start (we do not set the DBM bit up front). When an arbitrary PT takes a fault, we do software tracking for this PT and enable hardware tracking for its *nearby* PTs (e.g., set the DBM bit for the nearby 16 PTs). Then, when we sync the dirty log, we already know all the PTs with hardware tracking enabled, so we do not need to scan all PTs. A simplified sketch of this scheme follows the test results below.

      mem abort point               mem abort point
            ↓                             ↓
 ---------------------------------------------------------------
 |********|          |         |********|          |            |
 ---------------------------------------------------------------
      ↑                             ↑
 set DBM bit of                set DBM bit of
 this PT section (64PTEs)      this PT section (64PTEs)

One may worry that when the dirty rate is very high, we still need to scan too many PTs. What we mainly care about is the VM stop time: with QEMU dirty rate throttling, the amount of dirty memory converges toward the VM stop threshold, so there are only a few PTs left to scan after the VM stops.

This scheme has the advantage of hardware tracking, which minimizes the side effect on vCPUs, and also the advantage of software tracking, which lets us throttle the vCPU dirty rate. Moreover, software tracking lets us scan the PTs at a few fixed points, which greatly reduces the scanning time. And the biggest benefit is that we can apply this solution to DMA dirty tracking.

Test:

Host: Kunpeng 920 with 128 CPUs and 512 GB RAM. Transparent Hugepage is disabled, to make sure the result is not affected by the dissolving of block page tables at the early stage of migration.
VM: 16 CPUs and 16 GB RAM, running 4 pairs of (redis_benchmark + redis_server). Each configuration is run 5 times, once with the software dirty log and once with the SW/HW combined dirty log.

Test result:

We gain a 5%~7% improvement in Redis QPS during VM migration. VM downtime is not fundamentally affected. About 56.7% of the DBM bits set are effectively used.
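To make the scheme above concrete, here is a minimal user-space C model of the two paths (fault handling and dirty-log sync). It is only an illustrative sketch: the section size (64, matching the diagram), the bit positions, and every name in it (handle_write_fault, sync_dirty_log, PTES_PER_SECTION, ...) are invented for this example and are not the interfaces added by the patches.

/*
 * Illustrative model only. A write fault software-marks the faulting
 * page and arms hardware (DBM) tracking for the whole surrounding
 * section; the sync path then scans just the armed sections, treating
 * "DBM set and writable" as the hardware dirty indication (on real
 * hardware, a write to a DBM-armed read-only stage-2 PTE makes the
 * PTE writable instead of faulting).
 */
#include <stdbool.h>
#include <stdint.h>

#define PTES_PER_SECTION 64            /* illustrative section size */
#define NR_PTES          4096
#define NR_SECTIONS      (NR_PTES / PTES_PER_SECTION)

#define PTE_DBM          (1ULL << 51)  /* stage-2 DBM bit */
#define PTE_S2_WRITE     (1ULL << 7)   /* stage-2 write permission (model) */

uint64_t pte[NR_PTES];                 /* stand-in for the stage-2 PTs */
bool dbm_armed[NR_SECTIONS];           /* sections that need rescanning */
bool dirty_bitmap[NR_PTES];            /* the dirty log reported to user */

/* Software path: called on a guest write abort (page was read-only). */
void handle_write_fault(unsigned long idx)
{
	unsigned long sec = idx / PTES_PER_SECTION;
	unsigned long base = sec * PTES_PER_SECTION;

	dirty_bitmap[idx] = true;          /* software-mark the faulting page */
	pte[idx] |= PTE_S2_WRITE;          /* let the guest write proceed */

	/* Hand the rest of the section over to hardware tracking. */
	for (unsigned long i = base; i < base + PTES_PER_SECTION; i++)
		pte[i] |= PTE_DBM;
	dbm_armed[sec] = true;
}

/* Sync path: only sections that ever took a fault are scanned. */
void sync_dirty_log(void)
{
	for (unsigned long sec = 0; sec < NR_SECTIONS; sec++) {
		if (!dbm_armed[sec])
			continue;
		for (unsigned long i = sec * PTES_PER_SECTION;
		     i < (sec + 1) * PTES_PER_SECTION; i++) {
			if ((pte[i] & PTE_DBM) && (pte[i] & PTE_S2_WRITE)) {
				dirty_bitmap[i] = true;
				pte[i] &= ~PTE_S2_WRITE; /* write-protect again */
			}
		}
		dbm_armed[sec] = false;
	}
}

In the real implementation the arming happens in the stage-2 fault path under the MMU lock, and removing write permission must be followed by TLB invalidation; both are omitted here for brevity.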
Keqian Zhu (7):
  arm64: cpufeature: Add API to report system support of HWDBM
  kvm: arm64: Use atomic operation when update PTE
  kvm: arm64: Add level_apply parameter for stage2_attr_walker
  kvm: arm64: Add some HW_DBM related pgtable interfaces
  kvm: arm64: Add some HW_DBM related mmu interfaces
  kvm: arm64: Only write protect selected PTE
  kvm: arm64: Start up SW/HW combined dirty log

 arch/arm64/include/asm/cpufeature.h  |  12 +++
 arch/arm64/include/asm/kvm_host.h    |   6 ++
 arch/arm64/include/asm/kvm_mmu.h     |   7 ++
 arch/arm64/include/asm/kvm_pgtable.h |  45 ++++++++++
 arch/arm64/kvm/arm.c                 | 125 ++++++++++++++++++++++++++
 arch/arm64/kvm/hyp/pgtable.c         | 130 ++++++++++++++++++++++-----
 arch/arm64/kvm/mmu.c                 |  47 +++++++++-
 arch/arm64/kvm/reset.c               |   8 +-
 8 files changed, 351 insertions(+), 29 deletions(-)