From patchwork Mon Jun 1 04:02:50 2020
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 11581217
From: Keqian Zhu
Subject: [PATCH] migration: Count new_dirty instead of real_dirty
Date: Mon, 1 Jun 2020 12:02:50 +0800
Message-ID: <20200601040250.38324-1-zhukeqian1@huawei.com>
Cc: Paolo Bonzini, Chao Fan, Keqian Zhu, wanghaibin.wang@huawei.com,
 Juan Quintela

The DIRTY_LOG_INITIALLY_ALL_SET feature is on the queue; this patch fixes
the dirty rate calculation for it. Once that feature is enabled,
real_dirty_pages equals the total memory size at the beginning, which
produces a wrong dirty rate and false-positive throttling.

Besides, the "real dirty rate" is neither the most suitable metric nor
very accurate:
1. Not suitable: what we mainly care about is the relationship between
   the dirty rate and the network bandwidth, so the net increase of
   dirty pages is the more meaningful number.

2. Not very accurate: with manual dirty log clear, some dirty pages get
   cleared during each period, so the reported "real dirty rate" ends up
   lower than the actual one.

Signed-off-by: Keqian Zhu
---
Two small standalone sketches illustrating both points are appended
after the diff below.

 include/exec/ram_addr.h | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index 5e59a3d8d7..af9677e291 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -443,7 +443,7 @@ static inline
 uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
                                                ram_addr_t start,
                                                ram_addr_t length,
-                                               uint64_t *real_dirty_pages)
+                                               uint64_t *accu_dirty_pages)
 {
     ram_addr_t addr;
     unsigned long word = BIT_WORD((start + rb->offset) >> TARGET_PAGE_BITS);
@@ -469,7 +469,6 @@ uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
             if (src[idx][offset]) {
                 unsigned long bits = atomic_xchg(&src[idx][offset], 0);
                 unsigned long new_dirty;
-                *real_dirty_pages += ctpopl(bits);
                 new_dirty = ~dest[k];
                 dest[k] |= bits;
                 new_dirty &= bits;
@@ -502,7 +501,6 @@ uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
                         start + addr + offset,
                         TARGET_PAGE_SIZE,
                         DIRTY_MEMORY_MIGRATION)) {
-                *real_dirty_pages += 1;
                 long k = (start + addr) >> TARGET_PAGE_BITS;
                 if (!test_and_set_bit(k, dest)) {
                     num_dirty++;
@@ -511,6 +509,7 @@ uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
         }
     }
 
+    *accu_dirty_pages += num_dirty;
     return num_dirty;
 }
 #endif
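
Sketch 1 (not part of the patch): a minimal standalone C program, with
illustrative values and __builtin_popcountl standing in for QEMU's
ctpopl, contrasting the two counting policies on a single bitmap word.
The old code accumulated every page the sync reported dirty
(real_dirty_pages); the new code accumulates only pages that newly enter
the migration bitmap (accu_dirty_pages), i.e. the net increase the
commit message refers to.

#include <stdio.h>

int main(void)
{
    /* Migration bitmap already holds these pages (dirtied earlier, not sent). */
    unsigned long dest = 0x0ful;                     /* pages 0-3 pending */
    /* Bits just fetched (and cleared) from the kernel dirty log by the sync. */
    unsigned long bits = 0x3ful;                     /* pages 0-5 reported dirty */

    /* Old policy (real_dirty_pages): count every reported page, even those
     * that were already pending in the migration bitmap. */
    int real_dirty = __builtin_popcountl(bits);      /* 6 */

    /* New policy (accu_dirty_pages): count only the net increase of the
     * migration bitmap, i.e. bits that flip from 0 to 1 in dest. This mirrors
     * new_dirty = ~dest[k]; dest[k] |= bits; new_dirty &= bits; above. */
    unsigned long new_dirty = bits & ~dest;
    int net_dirty = __builtin_popcountl(new_dirty);  /* 2 (pages 4 and 5) */

    dest |= bits;                                    /* merge, as the sync does */

    printf("real dirty: %d, net new dirty: %d\n", real_dirty, net_dirty);
    return 0;
}

For the values chosen here the old policy counts 6 pages while the net
increase is only 2; it is the latter that is comparable with what is
actually left to send during the sync period.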
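
Sketch 2 (also not part of the patch, and not QEMU's actual throttle
code): a back-of-the-envelope illustration of the false-positive
throttling. The 16 GiB guest size, 4 KiB page size and 1 s sync period
are made-up numbers; the point is only that a counter starting at the
full memory size yields an apparent dirty rate no link can match.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const uint64_t page_size     = 4096;            /* assumed 4 KiB pages */
    const double   sync_period_s = 1.0;             /* assumed sync interval */
    const uint64_t guest_ram     = 16ULL << 30;     /* assumed 16 GiB guest */

    /* First sync with DIRTY_LOG_INITIALLY_ALL_SET: every page is reported
     * dirty although the guest may have written nothing yet. */
    uint64_t reported_dirty = guest_ram / page_size;    /* ~4.2M pages */
    uint64_t newly_dirty    = 0;                        /* net increase */

    double old_rate = reported_dirty * page_size / sync_period_s;
    double new_rate = newly_dirty * page_size / sync_period_s;

    printf("old counting: %.1f GiB/s apparent dirty rate -> far above any "
           "link bandwidth, so throttling kicks in\n", old_rate / (1 << 30));
    printf("new counting: %.1f GiB/s\n", new_rate / (1 << 30));
    return 0;
}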