
[08/30] nVMX: Fix local_vcpus_link handling

Message ID 201105080819.p488JIgW017952@rice.haifa.ibm.com (mailing list archive)

Commit Message

Nadav Har'El May 8, 2011, 8:19 a.m. UTC
In VMX, before we bring down a CPU we must VMCLEAR all VMCSs loaded on it
because (at least in theory) the processor might not have written all of their
contents back to memory. Since a patch from June 26, 2008, this is done using
a per-cpu "vcpus_on_cpu" linked list of vcpus loaded on each CPU.
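
In outline, that mechanism looks roughly like this (a simplified sketch
condensed from vmx.c of this era; locking, CPU-hotplug wiring and error
handling are elided):

/*
 * Simplified sketch of the existing bookkeeping: each CPU keeps a list
 * of the vcpus whose VMCS was last loaded on it, and before the CPU
 * goes down every such VMCS is VMCLEARed.
 */
static DEFINE_PER_CPU(struct list_head, vcpus_on_cpu);

static void vmclear_local_vcpus(void)
{
        int cpu = raw_smp_processor_id();
        struct vcpu_vmx *vmx, *n;

        list_for_each_entry_safe(vmx, n, &per_cpu(vcpus_on_cpu, cpu),
                                 local_vcpus_link)
                __vcpu_clear(vmx);      /* VMCLEARs vmx->vmcs and unlinks it */
}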

The problem is that with nested VMX, we no longer have the concept of a
vcpu being loaded on a cpu: A vcpu has multiple VMCSs (one for L1, others for
each L2), and each of those may have been last loaded on a different cpu.
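
To illustrate the mismatch (the struct and field names below are hypothetical,
for illustration only, not the actual nVMX layout): a vcpu owns several VMCSs,
each of which remembers its own last CPU, yet there is only one
local_vcpus_link per vcpu to put on a per-cpu list.

/* Illustration only -- hypothetical fields, not the real nVMX structures. */
struct nested_vcpu_example {
        struct vmcs *vmcs01;                /* L1's VMCS */
        struct vmcs *vmcs02;                /* a VMCS used for some L2 */
        int vmcs01_cpu, vmcs02_cpu;         /* may be different CPUs */
        struct list_head local_vcpus_link;  /* but only one per-cpu link */
};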

This trivial patch changes the code to keep only L1 VMCSs on vcpus_on_cpu.
This fixes crashes on L1 shutdown caused by incorrectly maintaining the linked
lists.

It is not a complete solution, though. It doesn't flush the inactive L1 or L2
VMCSs loaded on a CPU which is being shut down. Doing this correctly will
probably require replacing the vcpu linked list with a linked list of
"saved_vmcs" objects (VMCS, cpu and launched), and it is left as a TODO.
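
A minimal sketch of what such a "saved_vmcs" object could look like, going
only by the fields the TODO names (VMCS, cpu, launched); the per-cpu list
link is an added assumption, not committed code:

/*
 * Hypothetical sketch: track each VMCS, rather than each vcpu, on the
 * per-cpu list, so that L1 and every L2 VMCS can be VMCLEARed
 * independently when a CPU goes down.
 */
struct saved_vmcs {
        struct vmcs *vmcs;
        int cpu;                 /* CPU this VMCS was last loaded on, or -1 */
        int launched;            /* has VMLAUNCH been done on this VMCS? */
        struct list_head link;   /* assumed: per-cpu list linkage */
};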

Signed-off-by: Nadav Har'El <nyh@il.ibm.com>
---
 arch/x86/kvm/vmx.c |   14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

Patch

--- .before/arch/x86/kvm/vmx.c	2011-05-08 10:43:18.000000000 +0300
+++ .after/arch/x86/kvm/vmx.c	2011-05-08 10:43:18.000000000 +0300
@@ -638,7 +638,9 @@  static void __vcpu_clear(void *arg)
 		vmcs_clear(vmx->vmcs);
 	if (per_cpu(current_vmcs, cpu) == vmx->vmcs)
 		per_cpu(current_vmcs, cpu) = NULL;
-	list_del(&vmx->local_vcpus_link);
+	/* TODO: currently, local_vcpus_link is just for L1 VMCSs */
+	if (!is_guest_mode(&vmx->vcpu))
+		list_del(&vmx->local_vcpus_link);
 	vmx->vcpu.cpu = -1;
 	vmx->launched = 0;
 }
@@ -1100,8 +1102,10 @@  static void vmx_vcpu_load(struct kvm_vcp
 
 		kvm_make_request(KVM_REQ_TLB_FLUSH, vcpu);
 		local_irq_disable();
-		list_add(&vmx->local_vcpus_link,
-			 &per_cpu(vcpus_on_cpu, cpu));
+		/* TODO: currently, local_vcpus_link is just for L1 VMCSs */
+		if (!is_guest_mode(&vmx->vcpu))
+			list_add(&vmx->local_vcpus_link,
+				 &per_cpu(vcpus_on_cpu, cpu));
 		local_irq_enable();
 
 		/*
@@ -1806,7 +1810,9 @@  static void vmclear_local_vcpus(void)
 
 	list_for_each_entry_safe(vmx, n, &per_cpu(vcpus_on_cpu, cpu),
 				 local_vcpus_link)
-		__vcpu_clear(vmx);
+		/* TODO: currently, local_vcpus_link is just for L1 VMCSs */
+		if (!is_guest_mode(&vmx->vcpu))
+			__vcpu_clear(vmx);
 }