From patchwork Fri Jun 25 23:25:09 2010
From: Alexander Graf <agraf@suse.de>
X-Patchwork-Id: 108177
To: kvm-ppc@vger.kernel.org
Cc: KVM list <kvm@vger.kernel.org>, linuxppc-dev
Subject: [PATCH 21/26] KVM: PPC: Introduce kvm_tmp framework
Date: Sat, 26 Jun 2010 01:25:09 +0200
Message-Id: <1277508314-915-22-git-send-email-agraf@suse.de>
In-Reply-To: <1277508314-915-1-git-send-email-agraf@suse.de>
References: <1277508314-915-1-git-send-email-agraf@suse.de>

diff --git a/arch/powerpc/kernel/kvm.c b/arch/powerpc/kernel/kvm.c
index b091f94..7e8fe6f 100644
--- a/arch/powerpc/kernel/kvm.c
+++ b/arch/powerpc/kernel/kvm.c
@@ -64,6 +64,8 @@
 #define KVM_INST_TLBSYNC	0x7c00046c
 
 static bool kvm_patching_worked = true;
+static char kvm_tmp[1024 * 1024];
+static int kvm_tmp_index;
 
 static void kvm_patch_ins_ld(u32 *inst, long addr, u32 rt)
 {
@@ -98,6 +100,23 @@ static void kvm_patch_ins_nop(u32 *inst)
 	*inst = KVM_INST_NOP;
 }
 
+static u32 *kvm_alloc(int len)
+{
+	u32 *p;
+
+	if ((kvm_tmp_index + len) > ARRAY_SIZE(kvm_tmp)) {
+		printk(KERN_ERR "KVM: No more space (%d + %d)\n",
+		       kvm_tmp_index, len);
+		kvm_patching_worked = false;
+		return NULL;
+	}
+
+	p = (void*)&kvm_tmp[kvm_tmp_index];
+	kvm_tmp_index += len;
+
+	return p;
+}
+
 static void kvm_map_magic_page(void *data)
 {
 	kvm_hypercall2(KVM_HC_PPC_MAP_MAGIC_PAGE,
@@ -197,12 +216,27 @@ static void kvm_use_magic_page(void)
 		kvm_check_ins(p);
 }
 
+static void kvm_free_tmp(void)
+{
+	unsigned long start, end;
+
+	start = (ulong)&kvm_tmp[kvm_tmp_index + (PAGE_SIZE - 1)] & PAGE_MASK;
+	end = (ulong)&kvm_tmp[ARRAY_SIZE(kvm_tmp)] & PAGE_MASK;
+
+	/* Free the tmp space we don't need */
+	for (; start < end; start += PAGE_SIZE) {
+		ClearPageReserved(virt_to_page(start));
+		init_page_count(virt_to_page(start));
+		free_page(start);
+		totalram_pages++;
+	}
+}
+
 static int __init kvm_guest_init(void)
 {
-	char *p;
 
 	if (!kvm_para_available())
-		return 0;
+		goto free_tmp;
 
 	if (kvm_para_has_feature(KVM_FEATURE_MAGIC_PAGE))
 		kvm_use_magic_page();
@@ -210,6 +244,9 @@ static int __init kvm_guest_init(void)
 	printk(KERN_INFO "KVM: Live patching for a fast VM %s\n",
 	       kvm_patching_worked ? "worked" : "failed");
 
+free_tmp:
+	kvm_free_tmp();
+
 	return 0;
 }
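
For readers following along outside the kernel tree: the patch carves a 1 MiB
scratch pool out of the guest kernel's BSS, hands out pieces of it with a
simple bump allocator (kvm_alloc), and once init is done returns every
untouched page of the pool to the page allocator (kvm_free_tmp). Below is a
minimal userspace C sketch of that same pattern. The names mirror the patch,
but this is purely illustrative: madvise(MADV_DONTNEED) stands in for the
kernel's ClearPageReserved()/free_page() sequence, and main() is a made-up
caller, not anything from the series.

/* Build: cc -O2 kvm_tmp_sketch.c -o kvm_tmp_sketch */
#define _DEFAULT_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))

/* 1 MiB scratch pool in BSS, mirroring the patch. */
static char kvm_tmp[1024 * 1024];
static int kvm_tmp_index;
static int kvm_patching_worked = 1;

/* Bump allocator: hand out the next len bytes of the pool, or fail. */
static uint32_t *kvm_alloc(int len)
{
	uint32_t *p;

	if ((kvm_tmp_index + len) > (int)ARRAY_SIZE(kvm_tmp)) {
		fprintf(stderr, "KVM: No more space (%d + %d)\n",
			kvm_tmp_index, len);
		kvm_patching_worked = 0;
		return NULL;
	}

	p = (void *)&kvm_tmp[kvm_tmp_index];
	kvm_tmp_index += len;

	return p;
}

/* Give the pool's untouched page-aligned tail back to the OS. */
static void kvm_free_tmp(void)
{
	unsigned long page = (unsigned long)sysconf(_SC_PAGESIZE);
	unsigned long start, end;

	/* First page boundary at or above the last byte handed out... */
	start = ((unsigned long)&kvm_tmp[kvm_tmp_index] + page - 1) &
		~(page - 1);
	/* ...up to the last page boundary inside the pool. */
	end = (unsigned long)&kvm_tmp[ARRAY_SIZE(kvm_tmp)] & ~(page - 1);

	/* Userspace stand-in for the kernel's free_page() loop. */
	if (start < end)
		madvise((void *)start, end - start, MADV_DONTNEED);
}

int main(void)
{
	/* Pretend a patching pass emitted five 4-byte instructions. */
	uint32_t *code = kvm_alloc(5 * 4);

	if (code)
		memset(code, 0, 5 * 4);

	kvm_free_tmp();
	printf("used %d bytes, patching %s\n", kvm_tmp_index,
	       kvm_patching_worked ? "worked" : "failed");
	return 0;
}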
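
One design choice worth noting: kvm_tmp is a static array in BSS rather than
memory from the page allocator, presumably so that replacement code written
into it is guaranteed to sit inside the kernel image and therefore within
reach of PowerPC's immediate branches (26-bit offset, about +/- 32 MB) from
any patched instruction. Whatever the patching pass does not consume is
released again in kvm_free_tmp(), so the guest only pays for the pages it
actually used.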