From patchwork Mon Feb 8 11:23:37 2010
X-Patchwork-Submitter: OHMURA Kei
X-Patchwork-Id: 77714
Message-ID: <4B6FF439.6030006@lab.ntt.co.jp>
Date: Mon, 08 Feb 2010 20:23:37 +0900
From: OHMURA Kei
To: Jan Kiszka, kvm@vger.kernel.org, qemu-devel@nongnu.org
CC: avi@redhat.com, ohmura.kei@lab.ntt.co.jp
Subject: Re: [PATCH] qemu-kvm: Speed up of the dirty-bitmap-traveling
References: <4B6BF06D.1090909@lab.ntt.co.jp> <4B6C0958.50704@siemens.com>
 <4B6FABCE.207@lab.ntt.co.jp>
In-Reply-To: <4B6FABCE.207@lab.ntt.co.jp>

diff --git a/kvm-all.c b/kvm-all.c
index 15ec38e..9666843 100644
--- a/kvm-all.c
+++ b/kvm-all.c
@@ -279,9 +279,69 @@ int kvm_set_migration_log(int enable)
     return 0;
 }
 
-static int test_le_bit(unsigned long nr, unsigned char *addr)
+static inline void kvm_get_dirty_pages_log_range_by_byte(unsigned int start,
+                                                         unsigned int end,
+                                                         unsigned char *bitmap,
+                                                         unsigned long offset)
 {
-    return (addr[nr >> 3] >> (nr & 7)) & 1;
+    unsigned int i, j, n = 0;
+    unsigned long page_number, addr, addr1;
+    ram_addr_t ram_addr;
+    unsigned char c;
+
+    /*
+     * bitmap-traveling is faster than memory-traveling (for addr...)
+     * especially when most of the memory is not dirty.
+     */
+    for (i = start; i < end; i++) {
+        c = bitmap[i];
+        while (c > 0) {
+            j = ffsl(c) - 1;
+            c &= ~(1u << j);
+            page_number = i * 8 + j;
+            addr1 = page_number * TARGET_PAGE_SIZE;
+            addr = offset + addr1;
+            ram_addr = cpu_get_physical_page_desc(addr);
+            cpu_physical_memory_set_dirty(ram_addr);
+            n++;
+        }
+    }
+}
+
+static int kvm_get_dirty_pages_log_range_by_long(unsigned long start_addr,
+                                                 unsigned char *bitmap,
+                                                 unsigned long mem_size)
+{
+    unsigned int i;
+    unsigned int len;
+    unsigned long *bitmap_ul = (unsigned long *)bitmap;
+
+    /* bitmap-traveling by long size is faster than by byte size
+     * especially when most of memory is not dirty.
+     * bitmap should be long-size aligned for traveling by long.
+     */
+    if (((unsigned long)bitmap & (TARGET_LONG_SIZE - 1)) == 0) {
+        len = ((mem_size / TARGET_PAGE_SIZE) + TARGET_LONG_BITS - 1) /
+            TARGET_LONG_BITS;
+        for (i = 0; i < len; i++)
+            if (bitmap_ul[i] != 0)
+                kvm_get_dirty_pages_log_range_by_byte(i * TARGET_LONG_SIZE,
+                    (i + 1) * TARGET_LONG_SIZE, bitmap, start_addr);
+        /*
+         * We will check the remaining dirty-bitmap,
+         * when the mem_size is not a multiple of TARGET_LONG_SIZE.
+         */
+        if ((mem_size & (TARGET_LONG_SIZE - 1)) != 0) {
+            len = ((mem_size / TARGET_PAGE_SIZE) + 7) / 8;
+            kvm_get_dirty_pages_log_range_by_byte(i * TARGET_LONG_SIZE,
+                len, bitmap, start_addr);
+        }
+    } else { /* slow path: traveling by byte. */
+        len = ((mem_size / TARGET_PAGE_SIZE) + 7) / 8;
+        kvm_get_dirty_pages_log_range_by_byte(0, len, bitmap, start_addr);
+    }
+
+    return 0;
 }
 
 /**
@@ -297,8 +357,6 @@ int kvm_physical_sync_dirty_bitmap(target_phys_addr_t start_addr,
 {
     KVMState *s = kvm_state;
     unsigned long size, allocated_size = 0;
-    target_phys_addr_t phys_addr;
-    ram_addr_t addr;
     KVMDirtyLog d;
     KVMSlot *mem;
     int ret = 0;
@@ -327,17 +385,9 @@ int kvm_physical_sync_dirty_bitmap(target_phys_addr_t start_addr,
             break;
         }
 
-        for (phys_addr = mem->start_addr, addr = mem->phys_offset;
-             phys_addr < mem->start_addr + mem->memory_size;
-             phys_addr += TARGET_PAGE_SIZE, addr += TARGET_PAGE_SIZE) {
-            unsigned char *bitmap = (unsigned char *)d.dirty_bitmap;
-            unsigned nr = (phys_addr - mem->start_addr) >> TARGET_PAGE_BITS;
-
-            if (test_le_bit(nr, bitmap)) {
-                cpu_physical_memory_set_dirty(addr);
-            }
-        }
-        start_addr = phys_addr;
+        kvm_get_dirty_pages_log_range_by_long(mem->start_addr,
+                                              d.dirty_bitmap, mem->memory_size);
+        start_addr = mem->start_addr + mem->memory_size;
     }
 
     qemu_free(d.dirty_bitmap);
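
For anyone following the thread, the gist of the optimization is easiest to
see outside QEMU: instead of testing every page's bit as the old
test_le_bit() loop did, scan the bitmap one unsigned long at a time, skip
zero words (the common case when little memory is dirty), and use
find-first-set to jump straight to each dirty page. Below is a minimal
standalone sketch of that bitmap-traveling idea. It is not part of the
patch: PAGE_SHIFT and mark_page_dirty() are illustrative stand-ins for
QEMU's TARGET_PAGE_BITS and cpu_physical_memory_set_dirty(), and the
word-at-a-time loop folds together what the patch splits into the
_by_long/_by_byte pair.

/* bitmap_travel.c -- standalone sketch of word-at-a-time
 * dirty-bitmap traveling (illustrative, not the patch itself).
 * Build: gcc -Wall -o bitmap_travel bitmap_travel.c
 */
#define _GNU_SOURCE        /* for ffsl() on glibc */
#include <stdio.h>
#include <string.h>
#include <strings.h>

#define PAGE_SHIFT 12      /* assume 4 KiB pages, like TARGET_PAGE_BITS */
#define BITS_PER_LONG (sizeof(unsigned long) * 8)

/* Stand-in for cpu_physical_memory_set_dirty(). */
static void mark_page_dirty(unsigned long page_number)
{
    printf("page %lu dirty (addr 0x%lx)\n",
           page_number, page_number << PAGE_SHIFT);
}

/* Bit n of 'bitmap' set => page n is dirty. */
static void travel_bitmap(const unsigned long *bitmap, size_t words)
{
    size_t i;

    for (i = 0; i < words; i++) {
        unsigned long w = bitmap[i];

        /* A clean word skips BITS_PER_LONG pages with one compare. */
        while (w != 0) {
            int j = ffsl(w) - 1;   /* index of lowest set bit */
            w &= w - 1;            /* clear that bit */
            mark_page_dirty(i * BITS_PER_LONG + j);
        }
    }
}

int main(void)
{
    unsigned long bitmap[4];

    memset(bitmap, 0, sizeof(bitmap));
    bitmap[0] = 0x5;                         /* pages 0 and 2 dirty */
    bitmap[3] = 1UL << (BITS_PER_LONG - 1);  /* last page dirty */

    travel_bitmap(bitmap, 4);
    return 0;
}

With 4 KiB pages and 64-bit longs, one zero-test covers 256 KiB of guest
memory, which is where the speedup over the old page-at-a-time loop comes
from. The alignment check in the patch exists because the cast of the byte
bitmap to unsigned long * is only safe when KVM's buffer is suitably
aligned; otherwise the patch falls back to the byte-wise path.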