From: Pekka Enberg
To: kvm@vger.kernel.org
Cc: Alexander Graf, Cyrill Gorcunov, Ingo Molnar, John Floren, Sasha Levin
Subject: [PATCH v2] kvm tools, vesa: Use guest-mapped memory for framebuffer
Date: Mon, 6 Jun 2011 16:56:29 +0300
Message-Id: <1307368589-11682-1-git-send-email-penberg@kernel.org>

This patch converts hw/vesa.c to use guest-mapped memory for the
framebuffer and drops the slow MMIO emulation, which speeds up
framebuffer accesses considerably. Note that this can be optimized
further with the KVM_GET_DIRTY_LOG ioctl(), as explained by
Alexander Graf.

Cc: Alexander Graf
Cc: Cyrill Gorcunov
Cc: Ingo Molnar
Cc: John Floren
Cc: Sasha Levin
Signed-off-by: Pekka Enberg
---
v1 -> v2: Fix the mem slot index passed to KVM_SET_USER_MEMORY_REGION
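
For reference, a rough sketch of what the KVM_GET_DIRTY_LOG optimization
mentioned above could look like. This is an untested sketch, not part of
the patch: vesa__update_dirty() and the vesa_slot/fbmem parameters are
made-up names, and the framebuffer slot would first have to be registered
with the KVM_MEM_LOG_DIRTY_PAGES flag, which this patch does not do:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

#define PAGE_SIZE	4096
#define BITS_PER_LONG	(8 * sizeof(unsigned long))
#define VESA_PAGES	(VESA_MEM_SIZE / PAGE_SIZE)

/*
 * Hypothetical: ask KVM which framebuffer pages the guest dirtied
 * since the last call and push only those to the display backend.
 * 'vesa_slot' would be the slot index kvm__register_mem() used for
 * the framebuffer; the patch does not expose it, so a real version
 * would need kvm__register_mem() to return the allocated slot.
 */
static int vesa__update_dirty(struct kvm *kvm, u32 vesa_slot, u8 *fbmem)
{
	unsigned long bitmap[(VESA_PAGES + BITS_PER_LONG - 1) / BITS_PER_LONG];
	struct kvm_dirty_log log = {
		.slot		= vesa_slot,
		.dirty_bitmap	= bitmap,
	};
	unsigned long i;

	memset(bitmap, 0, sizeof(bitmap));

	/* One bit per page; KVM clears its internal log on read. */
	if (ioctl(kvm->vm_fd, KVM_GET_DIRTY_LOG, &log) < 0)
		return -1;

	for (i = 0; i < VESA_PAGES; i++)
		if (bitmap[i / BITS_PER_LONG] & (1UL << (i % BITS_PER_LONG)))
			fb__write(VESA_MEM_ADDR + i * PAGE_SIZE,
				  fbmem + i * PAGE_SIZE, PAGE_SIZE);

	return 0;
}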
 tools/kvm/hw/vesa.c         |   17 +++++------------
 tools/kvm/include/kvm/kvm.h |    3 +++
 tools/kvm/kvm.c             |   10 +++++-----
 3 files changed, 13 insertions(+), 17 deletions(-)

diff --git a/tools/kvm/hw/vesa.c b/tools/kvm/hw/vesa.c
index 48d31ce..71322fc 100644
--- a/tools/kvm/hw/vesa.c
+++ b/tools/kvm/hw/vesa.c
@@ -8,6 +8,7 @@
 #include "kvm/irq.h"
 #include "kvm/kvm.h"
 #include "kvm/pci.h"
+#include <sys/mman.h>
 
 #include <sys/types.h>
 #include <sys/ioctl.h>
@@ -40,14 +41,6 @@ static struct pci_device_header vesa_pci_device = {
 	.bar[1]			= VESA_MEM_ADDR | PCI_BASE_ADDRESS_SPACE_MEMORY,
 };
 
-static void vesa_mmio_callback(u64 addr, u8 *data, u32 len, u8 is_write)
-{
-	if (!is_write)
-		return;
-
-	fb__write(addr, data, len);
-}
-
 static struct framebuffer vesafb;
 
 struct framebuffer *vesa__init(struct kvm *kvm)
@@ -65,12 +58,12 @@ struct framebuffer *vesa__init(struct kvm *kvm)
 	vesa_pci_device.bar[0]	= vesa_base_addr | PCI_BASE_ADDRESS_SPACE_IO;
 	pci__register(&vesa_pci_device, dev);
 
-	kvm__register_mmio(kvm, VESA_MEM_ADDR, VESA_MEM_SIZE, &vesa_mmio_callback);
-
-	mem = calloc(1, VESA_MEM_SIZE);
-	if (!mem)
+	mem = mmap(NULL, VESA_MEM_SIZE, PROT_RW, MAP_ANON_NORESERVE, -1, 0);
+	if (mem == MAP_FAILED)
 		return NULL;
 
+	kvm__register_mem(kvm, VESA_MEM_ADDR, VESA_MEM_SIZE, mem);
+
 	vesafb = (struct framebuffer) {
 		.width			= VESA_WIDTH,
 		.height			= VESA_HEIGHT,
diff --git a/tools/kvm/include/kvm/kvm.h b/tools/kvm/include/kvm/kvm.h
index 55551de..17b7557 100644
--- a/tools/kvm/include/kvm/kvm.h
+++ b/tools/kvm/include/kvm/kvm.h
@@ -21,6 +21,8 @@ struct kvm {
 
 	int			nrcpus;		/* Number of cpus to run */
 
+	u32			mem_slots;	/* for KVM_SET_USER_MEMORY_REGION */
+
 	u64			ram_size;
 	void			*ram_start;
 
@@ -49,6 +51,7 @@ void kvm__stop_timer(struct kvm *kvm);
 void kvm__irq_line(struct kvm *kvm, int irq, int level);
 bool kvm__emulate_io(struct kvm *kvm, u16 port, void *data, int direction, int size, u32 count);
 bool kvm__emulate_mmio(struct kvm *kvm, u64 phys_addr, u8 *data, u32 len, u8 is_write);
+void kvm__register_mem(struct kvm *kvm, u64 guest_phys, u64 size, void *userspace_addr);
 bool kvm__register_mmio(struct kvm *kvm, u64 phys_addr, u64 phys_addr_len, void (*kvm_mmio_callback_fn)(u64 addr, u8 *data, u32 len, u8 is_write));
 bool kvm__deregister_mmio(struct kvm *kvm, u64 phys_addr);
 void kvm__pause(void);
diff --git a/tools/kvm/kvm.c b/tools/kvm/kvm.c
index 54e3203..65e94a1 100644
--- a/tools/kvm/kvm.c
+++ b/tools/kvm/kvm.c
@@ -162,13 +162,13 @@ static bool kvm__cpu_supports_vm(void)
 	return regs.ecx & (1 << feature);
 }
 
-static void kvm_register_mem_slot(struct kvm *kvm, u32 slot, u64 guest_phys, u64 size, void *userspace_addr)
+void kvm__register_mem(struct kvm *kvm, u64 guest_phys, u64 size, void *userspace_addr)
 {
 	struct kvm_userspace_memory_region mem;
 	int ret;
 
 	mem = (struct kvm_userspace_memory_region) {
-		.slot			= slot,
+		.slot			= kvm->mem_slots++,
 		.guest_phys_addr	= guest_phys,
 		.memory_size		= size,
 		.userspace_addr		= (unsigned long)userspace_addr,
@@ -200,7 +200,7 @@ void kvm__init_ram(struct kvm *kvm)
 		phys_size  = kvm->ram_size;
 		host_mem   = kvm->ram_start;
 
-		kvm_register_mem_slot(kvm, 0, phys_start, phys_size, host_mem);
+		kvm__register_mem(kvm, phys_start, phys_size, host_mem);
 	} else {
 		/* First RAM range from zero to the PCI gap: */
 
@@ -208,7 +208,7 @@ void kvm__init_ram(struct kvm *kvm)
 		phys_size  = KVM_32BIT_GAP_START;
 		host_mem   = kvm->ram_start;
 
-		kvm_register_mem_slot(kvm, 0, phys_start, phys_size, host_mem);
+		kvm__register_mem(kvm, phys_start, phys_size, host_mem);
 
 		/* Second RAM range from 4GB to the end of RAM: */
 
@@ -216,7 +216,7 @@ void kvm__init_ram(struct kvm *kvm)
 		phys_size  = kvm->ram_size - phys_size;
 		host_mem   = kvm->ram_start + phys_start;
 
-		kvm_register_mem_slot(kvm, 1, phys_start, phys_size, host_mem);
+		kvm__register_mem(kvm, phys_start, phys_size, host_mem);
 	}
 }
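
A side note on the interface change: since the slot index now lives in
struct kvm and is allocated by kvm__register_mem() itself, callers no
longer pick slot numbers by hand (which is what v1 of this patch got
wrong). A hypothetical device wanting guest-visible memory would follow
the same pattern as vesa__init() above; dev__map_guest_mem() is a
made-up name for illustration:

#include <sys/mman.h>

/*
 * Hypothetical caller: map anonymous host memory and expose it to
 * the guest at 'guest_phys'. The memslot index is handed out
 * internally by kvm__register_mem() (kvm->mem_slots++), so several
 * devices can do this without clashing over hard-coded slots 0/1.
 */
static void *dev__map_guest_mem(struct kvm *kvm, u64 guest_phys, u64 size)
{
	void *mem;

	mem = mmap(NULL, size, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
	if (mem == MAP_FAILED)
		return NULL;

	kvm__register_mem(kvm, guest_phys, size, mem);

	return mem;
}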