Message ID: 20210202185734.1680553-24-bgardon@google.com (mailing list archive)
State: New, archived
Series: Allow parallel MMU operations with TDP MMU
On 02/02/21 19:57, Ben Gardon wrote:
>
> -        write_lock(&vcpu->kvm->mmu_lock);
> +
> +        if (is_tdp_mmu_root(vcpu->kvm, vcpu->arch.mmu->root_hpa))
> +                read_lock(&vcpu->kvm->mmu_lock);
> +        else
> +                write_lock(&vcpu->kvm->mmu_lock);
> +

I'd like to make this into two helper functions, but I'm not sure about
the naming:

- kvm_mmu_read_lock_for_root/kvm_mmu_read_unlock_for_root: not precise
  because it's really write-locked for shadow MMU roots

- kvm_mmu_lock_for_root/kvm_mmu_unlock_for_root: not clear that TDP MMU
  operations will need to operate in shared-lock mode

I prefer the first because at least it's the conservative option, but
I'm open to other opinions and suggestions.

Paolo
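To make the first naming option concrete, the helpers could look roughly
like this. This is only an illustrative sketch, not code posted in the
thread; it assumes the is_tdp_mmu_root() check used in the patch below and
the rwlock-based mmu_lock introduced earlier in the series.

/*
 * Sketch only: the "read" in the name refers to the TDP MMU case;
 * shadow MMU roots still take mmu_lock for write.
 */
static void kvm_mmu_read_lock_for_root(struct kvm *kvm, hpa_t root_hpa)
{
        if (is_tdp_mmu_root(kvm, root_hpa))
                read_lock(&kvm->mmu_lock);
        else
                write_lock(&kvm->mmu_lock);
}

static void kvm_mmu_read_unlock_for_root(struct kvm *kvm, hpa_t root_hpa)
{
        if (is_tdp_mmu_root(kvm, root_hpa))
                read_unlock(&kvm->mmu_lock);
        else
                write_unlock(&kvm->mmu_lock);
}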
On Wed, Feb 3, 2021 at 4:40 AM Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> On 02/02/21 19:57, Ben Gardon wrote:
> >
> > -        write_lock(&vcpu->kvm->mmu_lock);
> > +
> > +        if (is_tdp_mmu_root(vcpu->kvm, vcpu->arch.mmu->root_hpa))
> > +                read_lock(&vcpu->kvm->mmu_lock);
> > +        else
> > +                write_lock(&vcpu->kvm->mmu_lock);
> > +
>
> I'd like to make this into two helper functions, but I'm not sure about
> the naming:
>
> - kvm_mmu_read_lock_for_root/kvm_mmu_read_unlock_for_root: not precise
>   because it's really write-locked for shadow MMU roots
>
> - kvm_mmu_lock_for_root/kvm_mmu_unlock_for_root: not clear that TDP MMU
>   operations will need to operate in shared-lock mode
>
> I prefer the first because at least it's the conservative option, but
> I'm open to other opinions and suggestions.
>
> Paolo

Of the above two options, I like the second one, though I'd be happy with
either. I agree the first is more conservative, in that it's clear the MMU
lock could be shared. It feels a little misleading, though, to have "read"
in the name of the function but then acquire the write lock, especially
since there's code below that which expects the write lock.

I don't know of a good way to abstract this into a helper without some
comments to make it clear what's going on, but maybe there's a slightly
more open-coded compromise:

if (!kvm_mmu_read_lock_for_root(vcpu->kvm, vcpu->arch.mmu->root_hpa))
        write_lock(&vcpu->kvm->mmu_lock);

or

enum kvm_mmu_lock_mode lock_mode =
        get_mmu_lock_mode_for_root(vcpu->kvm, vcpu->arch.mmu->root_hpa);
....
kvm_mmu_lock_for_mode(lock_mode);

Not sure if either of those are actually clearer, but the latter trends in
the direction the RFC took, having an enum to capture read/write and
whether or not you yield in a lock mode parameter.
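The enum-based compromise could be sketched as below. The enum values and
helper bodies are hypothetical (only the names come from the message
above), and unlike the one-liner in the message the lock helper is passed
struct kvm explicitly so it has something to lock:

enum kvm_mmu_lock_mode {
        KVM_MMU_LOCK_WRITE,
        KVM_MMU_LOCK_READ,
};

/* Pick shared mode only for TDP MMU roots; everything else stays exclusive. */
static enum kvm_mmu_lock_mode get_mmu_lock_mode_for_root(struct kvm *kvm,
                                                          hpa_t root_hpa)
{
        return is_tdp_mmu_root(kvm, root_hpa) ? KVM_MMU_LOCK_READ :
                                                KVM_MMU_LOCK_WRITE;
}

static void kvm_mmu_lock_for_mode(struct kvm *kvm,
                                  enum kvm_mmu_lock_mode lock_mode)
{
        if (lock_mode == KVM_MMU_LOCK_READ)
                read_lock(&kvm->mmu_lock);
        else
                write_lock(&kvm->mmu_lock);
}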
On 03/02/21 18:46, Ben Gardon wrote:
> enum kvm_mmu_lock_mode lock_mode =
>         get_mmu_lock_mode_for_root(vcpu->kvm, vcpu->arch.mmu->root_hpa);
> ....
> kvm_mmu_lock_for_mode(lock_mode);
>
> Not sure if either of those are actually clearer, but the latter
> trends in the direction the RFC took, having an enum to capture
> read/write and whether or not you yield in a lock mode parameter.

Could be a possibility.  Also:

enum kvm_mmu_lock_mode lock_mode =
        kvm_mmu_lock_for_root(vcpu->kvm, vcpu->arch.mmu->root_hpa);

kvm_mmu_unlock(vcpu->kvm, lock_mode);

Anyway it can be done on top.

Paolo
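This variant folds the mode selection into the lock helper and has the
unlock helper consume the returned mode. Again a sketch only, under the
same assumptions and reusing the hypothetical enum from the previous
sketch; it is not part of the posted patch:

static enum kvm_mmu_lock_mode kvm_mmu_lock_for_root(struct kvm *kvm,
                                                    hpa_t root_hpa)
{
        /* TDP MMU roots can be handled under the shared (read) lock. */
        if (is_tdp_mmu_root(kvm, root_hpa)) {
                read_lock(&kvm->mmu_lock);
                return KVM_MMU_LOCK_READ;
        }

        write_lock(&kvm->mmu_lock);
        return KVM_MMU_LOCK_WRITE;
}

static void kvm_mmu_unlock(struct kvm *kvm, enum kvm_mmu_lock_mode lock_mode)
{
        if (lock_mode == KVM_MMU_LOCK_READ)
                read_unlock(&kvm->mmu_lock);
        else
                write_unlock(&kvm->mmu_lock);
}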
On Wed, Feb 03, 2021, Paolo Bonzini wrote:
> On 03/02/21 18:46, Ben Gardon wrote:
> > enum kvm_mmu_lock_mode lock_mode =
> >         get_mmu_lock_mode_for_root(vcpu->kvm, vcpu->arch.mmu->root_hpa);
> > ....
> > kvm_mmu_lock_for_mode(lock_mode);
> >
> > Not sure if either of those are actually clearer, but the latter
> > trends in the direction the RFC took, having an enum to capture
> > read/write and whether or not you yield in a lock mode parameter.
>
> Could be a possibility.  Also:
>
> enum kvm_mmu_lock_mode lock_mode =
>         kvm_mmu_lock_for_root(vcpu->kvm, vcpu->arch.mmu->root_hpa);
>
> kvm_mmu_unlock(vcpu->kvm, lock_mode);
>
> Anyway it can be done on top.

Maybe go with a literal name, unless we expect additional usage?  E.g.
kvm_mmu_(un)lock_for_page_fault() isn't terrible.

I'm not a fan of the kvm_mmu_lock_for_root() variants.  "for_root" doesn't
have an obvious connection to the page fault handler or to the read/shared
mode of the TDP MMU.  But, the name is also specific enough to pique my
curiosity and make me wonder what it's doing.
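The literal-name suggestion would amount to page-fault-specific wrappers.
A hypothetical rendering, which simply delegates to helpers shaped like
the first sketch above (nothing like this was posted in the thread):

static void kvm_mmu_lock_for_page_fault(struct kvm_vcpu *vcpu)
{
        kvm_mmu_read_lock_for_root(vcpu->kvm, vcpu->arch.mmu->root_hpa);
}

static void kvm_mmu_unlock_for_page_fault(struct kvm_vcpu *vcpu)
{
        kvm_mmu_read_unlock_for_root(vcpu->kvm, vcpu->arch.mmu->root_hpa);
}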
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index b4d6709c240e..3d181a2b2485 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3724,7 +3724,12 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
                 return r;
 
         r = RET_PF_RETRY;
-        write_lock(&vcpu->kvm->mmu_lock);
+
+        if (is_tdp_mmu_root(vcpu->kvm, vcpu->arch.mmu->root_hpa))
+                read_lock(&vcpu->kvm->mmu_lock);
+        else
+                write_lock(&vcpu->kvm->mmu_lock);
+
         if (mmu_notifier_retry(vcpu->kvm, mmu_seq))
                 goto out_unlock;
         r = make_mmu_pages_available(vcpu);
@@ -3739,7 +3744,10 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
                          prefault, is_tdp);
 
 out_unlock:
-        write_unlock(&vcpu->kvm->mmu_lock);
+        if (is_tdp_mmu_root(vcpu->kvm, vcpu->arch.mmu->root_hpa))
+                read_unlock(&vcpu->kvm->mmu_lock);
+        else
+                write_unlock(&vcpu->kvm->mmu_lock);
         kvm_release_pfn_clean(pfn);
         return r;
 }