Message ID | 20250224235542.2562848-5-seanjc@google.com (mailing list archive)
---|---
State | New
Series | KVM: x86: nVMX IRQ fix and VM teardown cleanups
On Mon, Feb 24, 2025 at 03:55:39PM -0800, Sean Christopherson wrote:
> Don't load (and then put) a vCPU when unloading its MMU during VM
> destruction, as nothing in kvm_mmu_unload() accesses vCPU state beyond the
> root page/address of each MMU, i.e. can't possibly need to run with the
> vCPU loaded.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>  arch/x86/kvm/x86.c | 9 +--------
>  1 file changed, 1 insertion(+), 8 deletions(-)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 045c61cc7e54..9978ed4c0917 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -12767,13 +12767,6 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
>  	return ret;
>  }
>
> -static void kvm_unload_vcpu_mmu(struct kvm_vcpu *vcpu)
> -{
> -	vcpu_load(vcpu);
> -	kvm_mmu_unload(vcpu);
> -	vcpu_put(vcpu);
> -}
> -
>  static void kvm_unload_vcpu_mmus(struct kvm *kvm)
>  {
>  	unsigned long i;
> @@ -12781,7 +12774,7 @@ static void kvm_unload_vcpu_mmus(struct kvm *kvm)
>
>  	kvm_for_each_vcpu(i, vcpu, kvm) {
>  		kvm_clear_async_pf_completion_queue(vcpu);
> -		kvm_unload_vcpu_mmu(vcpu);
> +		kvm_mmu_unload(vcpu);
What about just dropping kvm_unload_vcpu_mmu() here?
kvm_mmu_unload() will be invoked again in kvm_mmu_destroy().

kvm_arch_vcpu_destroy() --> kvm_mmu_destroy() --> kvm_mmu_unload().

>  	}
>  }
>
> --
> 2.48.1.658.g4767266eb4-goog
>
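For context, the destruction path Yan points at looks roughly like this at the
time of the series (a simplified sketch of kvm_mmu_destroy() from
arch/x86/kvm/mmu/mmu.c, not necessarily the exact upstream code):

void kvm_mmu_destroy(struct kvm_vcpu *vcpu)
{
	/*
	 * Drops all cached roots; this is the "another call" to
	 * kvm_mmu_unload() that makes the one in kvm_unload_vcpu_mmus()
	 * redundant during VM teardown.
	 */
	kvm_mmu_unload(vcpu);
	free_mmu_pages(&vcpu->arch.root_mmu);
	free_mmu_pages(&vcpu->arch.guest_mmu);
	mmu_free_memory_caches(vcpu);
}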
On Tue, Feb 25, 2025, Yan Zhao wrote:
> On Mon, Feb 24, 2025 at 03:55:39PM -0800, Sean Christopherson wrote:
> > Don't load (and then put) a vCPU when unloading its MMU during VM
> > destruction, as nothing in kvm_mmu_unload() accesses vCPU state beyond the
> > root page/address of each MMU, i.e. can't possibly need to run with the
> > vCPU loaded.
> >
> > Signed-off-by: Sean Christopherson <seanjc@google.com>
> > ---
> >  arch/x86/kvm/x86.c | 9 +--------
> >  1 file changed, 1 insertion(+), 8 deletions(-)
> >
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index 045c61cc7e54..9978ed4c0917 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -12767,13 +12767,6 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
> >  	return ret;
> >  }
> >
> > -static void kvm_unload_vcpu_mmu(struct kvm_vcpu *vcpu)
> > -{
> > -	vcpu_load(vcpu);
> > -	kvm_mmu_unload(vcpu);
> > -	vcpu_put(vcpu);
> > -}
> > -
> >  static void kvm_unload_vcpu_mmus(struct kvm *kvm)
> >  {
> >  	unsigned long i;
> > @@ -12781,7 +12774,7 @@ static void kvm_unload_vcpu_mmus(struct kvm *kvm)
> >
> >  	kvm_for_each_vcpu(i, vcpu, kvm) {
> >  		kvm_clear_async_pf_completion_queue(vcpu);
> > -		kvm_unload_vcpu_mmu(vcpu);
> > +		kvm_mmu_unload(vcpu);
> What about just dropping kvm_unload_vcpu_mmu() here?
> kvm_mmu_unload() will be invoked again in kvm_mmu_destroy().
>
> kvm_arch_vcpu_destroy() --> kvm_mmu_destroy() --> kvm_mmu_unload().

Ugh, I missed that there's yet another call to kvm_mmu_unload().  I definitely
agree with dropping the first kvm_mmu_unload(), but I'll do it in a follow-up
patch so that all three changes are isolated (not doing the load/put, doing
unload as part of vCPU destruction, doing unload only once at the end).

And looking at both calls to kvm_mmu_unload(), I suspect that grabbing
kvm->srcu around kvm_mmu_destroy() is unnecessary.  I'll try cleaning that up
as well.
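The SRCU usage Sean suspects is unnecessary sits in kvm_arch_vcpu_destroy();
the following is a simplified sketch under that reading (surrounding teardown
elided, details recalled from kernels around this series):

void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
{
	int idx;

	/* ... other vCPU teardown elided ... */

	/*
	 * The SRCU read-side critical section protects memslot lookups; if
	 * kvm_mmu_unload() (via kvm_mmu_destroy()) only frees cached roots
	 * and never touches memslots, the lock may indeed be unneeded.
	 */
	idx = srcu_read_lock(&vcpu->kvm->srcu);
	kvm_mmu_destroy(vcpu);
	srcu_read_unlock(&vcpu->kvm->srcu, idx);

	/* ... remaining teardown elided ... */
}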
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 045c61cc7e54..9978ed4c0917 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12767,13 +12767,6 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 	return ret;
 }
 
-static void kvm_unload_vcpu_mmu(struct kvm_vcpu *vcpu)
-{
-	vcpu_load(vcpu);
-	kvm_mmu_unload(vcpu);
-	vcpu_put(vcpu);
-}
-
 static void kvm_unload_vcpu_mmus(struct kvm *kvm)
 {
 	unsigned long i;
@@ -12781,7 +12774,7 @@ static void kvm_unload_vcpu_mmus(struct kvm *kvm)
 
 	kvm_for_each_vcpu(i, vcpu, kvm) {
 		kvm_clear_async_pf_completion_queue(vcpu);
-		kvm_unload_vcpu_mmu(vcpu);
+		kvm_mmu_unload(vcpu);
 	}
 }
Don't load (and then put) a vCPU when unloading its MMU during VM
destruction, as nothing in kvm_mmu_unload() accesses vCPU state beyond the
root page/address of each MMU, i.e. can't possibly need to run with the
vCPU loaded.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/x86.c | 9 +--------
 1 file changed, 1 insertion(+), 8 deletions(-)
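For reference, kvm_mmu_unload() itself is roughly the following (a sketch
based on arch/x86/kvm/mmu/mmu.c around this series, not the verbatim upstream
code); nothing in it depends on the vCPU being loaded on a physical CPU, which
is what makes the vcpu_load()/vcpu_put() pair removable:

void kvm_mmu_unload(struct kvm_vcpu *vcpu)
{
	struct kvm *kvm = vcpu->kvm;

	/*
	 * Free the cached root pages of both the root and guest MMUs; only
	 * per-MMU root tracking is touched, no loaded-vCPU state.
	 */
	kvm_mmu_free_roots(kvm, &vcpu->arch.root_mmu, KVM_MMU_ROOTS_ALL);
	WARN_ON_ONCE(VALID_PAGE(vcpu->arch.root_mmu.root.hpa));
	kvm_mmu_free_roots(kvm, &vcpu->arch.guest_mmu, KVM_MMU_ROOTS_ALL);
	WARN_ON_ONCE(VALID_PAGE(vcpu->arch.guest_mmu.root.hpa));
}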