
nVMX: Fix bug preventing more than two levels of nesting

Message ID 20110602085452.589883806F0@moren.haifa.ibm.com (mailing list archive)
State New, archived

Commit Message

Nadav Har'El June 2, 2011, 8:54 a.m. UTC
The nested VMX feature is supposed to fully emulate VMX for the guest. This
(theoretically) allows it not only to run its own guests, but also to further
emulate VMX for its own guests, allowing arbitrarily deep nesting.

This patch fixes a bug (discovered by Kevin Tian) in handling a VMLAUNCH
by L2, which prevented deeper nesting.

Deeper nesting now works (I only actually tested L3), but is currently
*absurdly* slow, to the point of being unusable.

Signed-off-by: Nadav Har'El <nyh@il.ibm.com>
---
 arch/x86/kvm/vmx.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
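To illustrate the logic of the fix, here is a minimal, self-contained sketch
(not kernel code: the EXIT_REASON_* values match the Intel SDM and the
is_guest_mode() notion mirrors KVM's, but everything else is made up for
demonstration):

/*
 * Toy model of the nested_run_pending decision in vmx_handle_exit().
 * Not kernel code: a standalone program modelling the fixed condition.
 */
#include <stdbool.h>
#include <stdio.h>

#define EXIT_REASON_VMLAUNCH 20
#define EXIT_REASON_VMRESUME 24

/* Should L0 treat this exit as "L1 is entering L2"? */
static bool nested_run_pending(bool is_guest_mode, int exit_reason)
{
	/*
	 * The fix: a VMLAUNCH/VMRESUME executed while already in guest
	 * mode comes from L2 (trying to start L3) and must be reflected
	 * to L1 for emulation -- it is not L1 entering L2.
	 */
	return !is_guest_mode && (exit_reason == EXIT_REASON_VMLAUNCH ||
				  exit_reason == EXIT_REASON_VMRESUME);
}

int main(void)
{
	/* L1 executes VMLAUNCH: L0 really is about to run L2. */
	printf("L1 VMLAUNCH: pending=%d\n",
	       nested_run_pending(false, EXIT_REASON_VMLAUNCH));

	/* L2 executes VMLAUNCH (to start L3): before the fix this also
	 * set the flag; now the exit is handed to L1 instead. */
	printf("L2 VMLAUNCH: pending=%d\n",
	       nested_run_pending(true, EXIT_REASON_VMLAUNCH));
	return 0;
}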


Comments

Marcelo Tosatti June 3, 2011, 2:06 a.m. UTC | #1
On Thu, Jun 02, 2011 at 11:54:52AM +0300, Nadav Har'El wrote:
> The nested VMX feature is supposed to fully emulate VMX for the guest. This
> (theoretically) allows it not only to run its own guests, but also to further
> emulate VMX for its own guests, allowing arbitrarily deep nesting.
> 
> This patch fixes a bug (discovered by Kevin Tian) in handling a VMLAUNCH
> by L2, which prevented deeper nesting.
> 
> Deeper nesting now works (I only actually tested L3), but is currently
> *absurdly* slow, to the point of being unusable.
> 
> Signed-off-by: Nadav Har'El <nyh@il.ibm.com>

Applied, thanks.


Patch

--- .before/arch/x86/kvm/vmx.c	2011-06-02 10:46:13.000000000 +0300
+++ .after/arch/x86/kvm/vmx.c	2011-06-02 10:46:13.000000000 +0300
@@ -5691,8 +5691,8 @@  static int vmx_handle_exit(struct kvm_vc
 	if (vmx->nested.nested_run_pending)
 		kvm_make_request(KVM_REQ_EVENT, vcpu);
 
-	if (exit_reason == EXIT_REASON_VMLAUNCH ||
-	    exit_reason == EXIT_REASON_VMRESUME)
+	if (!is_guest_mode(vcpu) && (exit_reason == EXIT_REASON_VMLAUNCH ||
+	    exit_reason == EXIT_REASON_VMRESUME))
 		vmx->nested.nested_run_pending = 1;
 	else
 		vmx->nested.nested_run_pending = 0;
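(Roughly speaking, nested_run_pending tells L0 that the entry into L2 which
L1 just requested has not completed yet; as the first two context lines of the
hunk show, pending events are then re-requested via KVM_REQ_EVENT rather than
handled at the wrong level. The fix makes sure a VMLAUNCH/VMRESUME executed by
L2 itself, which L0 must reflect to L1 for emulation, no longer sets the flag.)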