From patchwork Fri Feb 5 10:18:21 2010
X-Patchwork-Submitter: OHMURA Kei
X-Patchwork-Id: 77316
Message-ID: <4B6BF06D.1090909@lab.ntt.co.jp>
Date: Fri, 05 Feb 2010 19:18:21 +0900
From: OHMURA Kei
To: kvm@vger.kernel.org, qemu-devel@nongnu.org
CC: avi@redhat.com, ohmura.kei@lab.ntt.co.jp
Subject: [PATCH] qemu-kvm: Speed up of the dirty-bitmap-traveling

diff --git a/qemu-kvm.c b/qemu-kvm.c
index a305907..5459cdd 100644
--- a/qemu-kvm.c
+++ b/qemu-kvm.c
@@ -2433,22 +2433,21 @@ int kvm_physical_memory_set_dirty_tracking(int enable)
 }
 
 /* get kvm's dirty pages bitmap and update qemu's */
-static int kvm_get_dirty_pages_log_range(unsigned long start_addr,
-                                         unsigned char *bitmap,
-                                         unsigned long offset,
-                                         unsigned long mem_size)
+static void kvm_get_dirty_pages_log_range_by_byte(unsigned int start,
+                                                  unsigned int end,
+                                                  unsigned char *bitmap,
+                                                  unsigned long offset)
 {
     unsigned int i, j, n = 0;
     unsigned char c;
     unsigned long page_number, addr, addr1;
     ram_addr_t ram_addr;
-    unsigned int len = ((mem_size / TARGET_PAGE_SIZE) + 7) / 8;
 
     /*
      * bitmap-traveling is faster than memory-traveling (for addr...)
      * especially when most of the memory is not dirty.
      */
-    for (i = 0; i < len; i++) {
+    for (i = start; i < end; i++) {
         c = bitmap[i];
         while (c > 0) {
             j = ffsl(c) - 1;
@@ -2461,13 +2460,49 @@ static int kvm_get_dirty_pages_log_range(unsigned long start_addr,
             n++;
         }
     }
+}
+
+static int kvm_get_dirty_pages_log_range_by_long(unsigned long start_addr,
+                                                 unsigned char *bitmap,
+                                                 unsigned long offset,
+                                                 unsigned long mem_size)
+{
+    unsigned int i;
+    unsigned int len;
+    unsigned long *bitmap_ul = (unsigned long *)bitmap;
+
+    /* bitmap-traveling by long size is faster than by byte size
+     * especially when most of memory is not dirty.
+     * bitmap should be long-size aligned for traveling by long.
+     */
+    if (((unsigned long)bitmap & (TARGET_LONG_SIZE - 1)) == 0) {
+        len = ((mem_size / TARGET_PAGE_SIZE) + TARGET_LONG_BITS - 1) /
+            TARGET_LONG_BITS;
+        for (i = 0; i < len; i++)
+            if (bitmap_ul[i] != 0)
+                kvm_get_dirty_pages_log_range_by_byte(i * TARGET_LONG_SIZE,
+                    (i + 1) * TARGET_LONG_SIZE, bitmap, offset);
+        /*
+         * We will check the remaining dirty-bitmap,
+         * when the mem_size is not a multiple of TARGET_LONG_SIZE.
+         */
+        if ((mem_size & (TARGET_LONG_SIZE - 1)) != 0) {
+            len = ((mem_size / TARGET_PAGE_SIZE) + 7) / 8;
+            kvm_get_dirty_pages_log_range_by_byte(i * TARGET_LONG_SIZE,
+                len, bitmap, offset);
+        }
+    } else { /* slow path: traveling by byte. */
+        len = ((mem_size / TARGET_PAGE_SIZE) + 7) / 8;
+        kvm_get_dirty_pages_log_range_by_byte(0, len, bitmap, offset);
+    }
+
     return 0;
 }
 
 static int kvm_get_dirty_bitmap_cb(unsigned long start, unsigned long len,
                                    void *bitmap, void *opaque)
 {
-    return kvm_get_dirty_pages_log_range(start, bitmap, start, len);
+    return kvm_get_dirty_pages_log_range_by_long(start, bitmap, start, len);
 }
 
 /*
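
[Editor's note: the sketch below is not part of the patch. It is a minimal,
self-contained C program showing the same traversal trick in isolation:
scan the dirty bitmap one long at a time, skip words that are zero, and
fall back to byte-by-byte bit extraction only inside nonzero words.
PAGE_SHIFT, mark_page_dirty(), and the helper names are illustrative
assumptions, not QEMU's actual identifiers.]

    /* Standalone sketch of the patch's technique -- NOT QEMU code. */
    #include <stdio.h>
    #include <string.h>
    #include <strings.h>            /* ffsl() */

    #define PAGE_SHIFT 12           /* assume 4 KiB pages (illustrative) */

    /* Stand-in for marking one page dirty in qemu's bitmap. */
    static void mark_page_dirty(unsigned long page_number)
    {
        printf("dirty page %lu (addr 0x%lx)\n",
               page_number, page_number << PAGE_SHIFT);
    }

    /* Byte-wise scan of bitmap[start..end), like ..._by_byte() above:
     * pull set bits out with ffsl() instead of testing every page. */
    static void scan_by_byte(const unsigned char *bitmap,
                             unsigned int start, unsigned int end)
    {
        unsigned int i;

        for (i = start; i < end; i++) {
            unsigned char c = bitmap[i];

            while (c > 0) {
                int j = ffsl(c) - 1;

                c &= ~(1u << j);
                mark_page_dirty((unsigned long)i * 8 + j);
            }
        }
    }

    /* Long-wise fast path, like ..._by_long() above: skip zero longs
     * wholesale, descend to bytes only inside nonzero longs. */
    static void scan_by_long(const unsigned char *bitmap, unsigned int len)
    {
        const unsigned long *words = (const unsigned long *)bitmap;
        unsigned int nwords = len / sizeof(unsigned long);
        unsigned int i;

        for (i = 0; i < nwords; i++) {
            if (words[i] != 0) {
                scan_by_byte(bitmap, i * sizeof(unsigned long),
                             (i + 1) * sizeof(unsigned long));
            }
        }
        /* Tail bytes when len is not a multiple of sizeof(long). */
        scan_by_byte(bitmap, nwords * sizeof(unsigned long), len);
    }

    int main(void)
    {
        /* Long-aligned backing store, as the patch's alignment check
         * requires; written through a byte view, as KVM hands it over. */
        unsigned long storage[8];
        unsigned char *bitmap = (unsigned char *)storage;

        memset(storage, 0, sizeof(storage));
        bitmap[3]  = 0x81;          /* pages 24 and 31 dirty */
        bitmap[20] = 0x02;          /* page 161 dirty */

        scan_by_long(bitmap, sizeof(storage));
        return 0;
    }

When almost every long is zero, which is the common case during live
migration, the inner bit-extraction loop never runs, so the scan touches
roughly 1/sizeof(long) as many bitmap locations as the byte-wise original.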