From patchwork Mon Jun 20 13:42:43 2011
X-Patchwork-Submitter: Daniel Gollub <gollub@b1-systems.de>
X-Patchwork-Id: 897462
From: Daniel Gollub <gollub@b1-systems.de>
To: kvm@vger.kernel.org
Cc: Daniel Gollub <gollub@b1-systems.de>
Subject: [PATCH 1/2] Handle KVM hypercall panic on guest crash
Date: Mon, 20 Jun 2011 15:42:43 +0200
Message-Id: <1308577364-17650-2-git-send-email-gollub@b1-systems.de>
In-Reply-To: <1308577364-17650-1-git-send-email-gollub@b1-systems.de>
References: <1308577364-17650-1-git-send-email-gollub@b1-systems.de>
X-Mailer: git-send-email 1.7.1
X-Mailing-List: kvm@vger.kernel.org

If the guest crashes and its crash/panic handler issues the KVM panic
hypercall, KVM notifies userspace of this with the KVM_EXIT_PANIC exit
reason. The VM status is extended with a "panic" flag so that this
state can be queried via the QEMU monitor.
---
 kvm-all.c               |    4 ++++
 kvm/include/linux/kvm.h |    1 +
 monitor.c               |    8 ++++++--
 sysemu.h                |    1 +
 vl.c                    |    2 ++
 5 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/kvm-all.c b/kvm-all.c
index 629f727..9771f91 100644
--- a/kvm-all.c
+++ b/kvm-all.c
@@ -1029,6 +1029,10 @@ int kvm_cpu_exec(CPUState *env)
             qemu_system_reset_request();
             ret = EXCP_INTERRUPT;
             break;
+        case KVM_EXIT_PANIC:
+            panic = 1;
+            ret = 1;
+            break;
         case KVM_EXIT_UNKNOWN:
             fprintf(stderr, "KVM: unknown exit, hardware reason %" PRIx64 "\n",
                     (uint64_t)run->hw.hardware_exit_reason);
diff --git a/kvm/include/linux/kvm.h b/kvm/include/linux/kvm.h
index e46729e..207871c 100644
--- a/kvm/include/linux/kvm.h
+++ b/kvm/include/linux/kvm.h
@@ -161,6 +161,7 @@ struct kvm_pit_config {
 #define KVM_EXIT_NMI              16
 #define KVM_EXIT_INTERNAL_ERROR   17
 #define KVM_EXIT_OSI              18
+#define KVM_EXIT_PANIC            19
 
 /* For KVM_EXIT_INTERNAL_ERROR */
 #define KVM_INTERNAL_ERROR_EMULATION 1
diff --git a/monitor.c b/monitor.c
index 59a3e76..fd6a881 100644
--- a/monitor.c
+++ b/monitor.c
@@ -2599,13 +2599,17 @@ static void do_info_status_print(Monitor *mon, const QObject *data)
         monitor_printf(mon, "paused");
     }
 
+    if (qdict_get_bool(qdict, "panic")) {
+        monitor_printf(mon, " (panic)");
+    }
+
     monitor_printf(mon, "\n");
 }
 
 static void do_info_status(Monitor *mon, QObject **ret_data)
 {
-    *ret_data = qobject_from_jsonf("{ 'running': %i, 'singlestep': %i }",
-                                   vm_running, singlestep);
+    *ret_data = qobject_from_jsonf("{ 'running': %i, 'singlestep': %i, 'panic': %i }",
+                                   vm_running, singlestep, panic);
 }
 
 static qemu_acl *find_acl(Monitor *mon, const char *name)
diff --git a/sysemu.h b/sysemu.h
index a42d83f..8ab0168 100644
--- a/sysemu.h
+++ b/sysemu.h
@@ -12,6 +12,7 @@
 extern const char *bios_name;
 
 extern int vm_running;
+extern int panic;
 extern const char *qemu_name;
 extern uint8_t qemu_uuid[];
 int qemu_uuid_parse(const char *str, uint8_t *uuid);
diff --git a/vl.c b/vl.c
index e0191e1..1d9a068 100644
--- a/vl.c
+++ b/vl.c
@@ -185,6 +185,7 @@ int mem_prealloc = 0; /* force preallocation of physical target memory */
 int nb_nics;
 NICInfo nd_table[MAX_NICS];
 int vm_running;
+int panic = 0;
 int autostart;
 int incoming_expected; /* Started with -incoming and waiting for incoming */
 static int rtc_utc = 1;
@@ -1407,6 +1408,7 @@ static void main_loop(void)
         pause_all_vcpus();
         cpu_synchronize_all_states();
         qemu_system_reset();
+        panic = 0;
         resume_all_vcpus();
     }
     if (qemu_powerdown_requested()) {
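
With the 'panic' member added to do_info_status() above, a management
client should be able to detect a crashed guest through QMP roughly as
follows. This is a sketch only: the query-status command name and the
shape of the reply apart from the new 'panic' member are assumed from
the QEMU of that era rather than taken from a live run, and whether
'running' is still true at that point depends on how the vcpu loop
winds down after KVM_EXIT_PANIC:

-> { "execute": "query-status" }
<- { "return": { "running": false, "singlestep": false, "panic": true } }

In the human monitor, the same state shows up as the trailing "(panic)"
that do_info_status_print() now appends to the usual status line.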
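
For completeness, a minimal sketch of what the guest-side trigger could
look like. The real guest/kernel change belongs to the companion patch
referenced in this thread and is not shown here; in particular,
KVM_HC_PANIC below is an assumed, purely illustrative hypercall number,
not one defined by this series. The idea is simply that a panic
notifier issues a hypercall, KVM turns it into the KVM_EXIT_PANIC exit
handled in kvm_cpu_exec() above, and QEMU records it in the new 'panic'
flag:

/*
 * Hypothetical guest-side sketch -- not part of this patch.
 * KVM_HC_PANIC is an assumed hypercall number used only for illustration.
 */
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/notifier.h>
#include <asm/kvm_para.h>

#define KVM_HC_PANIC 100    /* assumption: illustrative value only */

/* Runs on the guest's panic notifier chain when the guest kernel panics. */
static int kvm_panic_event(struct notifier_block *nb, unsigned long event,
                           void *unused)
{
    /* Trap to the host; KVM would report this as KVM_EXIT_PANIC. */
    kvm_hypercall0(KVM_HC_PANIC);
    return NOTIFY_DONE;
}

static struct notifier_block kvm_panic_nb = {
    .notifier_call = kvm_panic_event,
};

static int __init kvm_panic_notify_init(void)
{
    atomic_notifier_chain_register(&panic_notifier_list, &kvm_panic_nb);
    return 0;
}
module_init(kvm_panic_notify_init);

MODULE_LICENSE("GPL");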